Preface

Language presentation

This language is not:

  • a general purpose language

  • a Turing-complete language

  • an imperative language

It has no:

  • recursion

  • generator / generic iterator

  • way of looping except over finite lists

This language is an Open Source DSL (domain-specific language) targeted at state definition. Everything that is not a state definition is a convenience for easier definition of a state. The compiler is deliberately pedantic in order to prevent the definition of invalid states as much as possible.

The file extension is .rd.

Example:

ntp.rd
@format=0

@name="Configure NTP"
@description="test"
@version = 0
@parameters=[]

resource Configure_NTP()

Configure_NTP state technique() {
  @component = "Package present"
  package("ntp").present("","","") as package_present_ntp
}

Once compiled to CFEngine code:

ntp.rd.cf
# generated by rudderc
# @name Configure NTP
# @description test
# @version 1.0

bundle agent Configure_NTP_technique
{
  vars:
    "resources_dir" string => "${this.promise_dirname}/resources";
  methods:
    "Package present_${report_data.directive_id}_0" usebundle => _method_reporting_context("Package present", "ntp");
    "Package present_${report_data.directive_id}_0" usebundle => package_present("ntp", "", "", "");
}
ntp.rd.ps1
# generated by rudderc
# @name Configure NTP
# @description test
# @version 1.0

function Configure-NTP {
  [CmdletBinding()]
  param (
    [Parameter(Mandatory=$False)] [Switch] $AuditOnly,
    [Parameter(Mandatory=$True)]  [String] $ReportId,
    [Parameter(Mandatory=$True)]  [String] $TechniqueName
  )

  $LocalClasses = New-ClassContext
  $ResourcesDir = $PSScriptRoot + "\resources"
  $LocalClasses = Merge-ClassContext $LocalClasses $(Package-Present -ComponentName "Package Present" -Name "ntp" -Version "" -Architecture "" -Provider "" -ReportId $ReportId -AuditOnly $AuditOnly -TechniqueName $TechniqueName).get_item("classes")
}

Short-term future abilities

  • Error feedback directly in the Technique Editor

  • Enhanced (or refactored):

    • Variable handling (for example in conditions)

Long-term future abilities

  • New keywords including the action, measure, function keywords

  • Fully rewrite the ncf library into a self-sufficient language library

  • Plain integration and usage of language into Rudder whether as code or into the Technique Editor

  • Various improvements and some reworks

Concepts

Resource

  • a resource is an object that sits on the system being configured

  • a resource is defined by a resource type with 0 or more parameters

  • a resource type with 0 parameters defines a unique resource in the system

  • a resource can contain other resources

State

  • a state is an elementary configuration of a resource

  • a state is defined by a name and 0 or more parameters

  • a given resource can have many states at the same time

  • a given resource can only have one state of a given name

  • state application produces a status also named result

Variables and types

  • configurations can be parameterized via constants and variables

  • constants and variables have a type

  • types are distinct from resource and state

  • types are all based on basic types: integer, float, string, boolean, array, hashmap

  • a variable cannot contain a resource or a state

Enums and conditions

  • an enum is an exhaustive list of possible values

  • an enum mapping maps all possible values of an enum to another enum

  • a condition is an enum expression

  • an enum expression is a boolean expression of enum comparison

  • enum comparison compares a variable with an enum value with mapping knowledge

  • state application outcomes are enums too

  • a case is a list of conditions that must match all possible cases exactly once

  • an if is a single condition

Lexical structure

File structure:

Keywords

The following keywords currently have the functionality described below, sorted by category:

  • header:

    • @format = .., metadata that defines the language version of the file. At compile time it indicates whether a version conversion must be performed beforehand

  • enum:

    • enum .., a list of values

    • global .., usage: paired with enum. Means enum values are unique and can be guessed without specifying a type

    • items in .., declares a sub-enum, i.e. extends an existing enum item

    • alias, gives another name to an enum item

  • types:

  • let .., global variable declaration

  • resource .., to declare a new resource

  • .. state .., to define a new state linked to an existing resource

  • flow operators:

    • if ..

    • case .., list (condition, then)

      • default, calls default behavior of an enum expression. Is mandatory when an enum ends with a *

      • nodefault, can be used in a single-case case switch

  • flow statements:

    • fail .., to stop engine with a final message

    • log_debug .., to print debug data

    • log_info .., to inform the user

    • log_warn .., to warn the user

    • return .., to return a specific result

    • noop, do nothing

Operators

  • @ declares a metadata which is a key / value pair (syntax is @key=value). Cf Metadata

  • # simple comment

  • ## parsed comment. ## comments are considered to be metadata and are parsed and kept.

  • | or, & and, ! not

  • . item in enum

  • .. items range, in enum

  • =~ is included or equal, !~ is not included or not equal. Used when comparing enum expressions

  • ! Audit state application

  • ? Condition state application

Identifiers

Identifiers are names given by users, containing only alphanumeric characters and underscores.

Identifiers can be:

  • all kind of aliases

  • parameters

  • enum names or enum item names

  • sub-enum names or sub-enum item names

  • metadata names

  • resource names and resource reference names

  • state names

  • variable names

  • agent variable names and values

Some identifiers are invalid. An identifier cannot be:

  • an already declared identifier in a given namespace

  • a CFEngine core variable (see file libs/cfengine_core.rd)

  • the name of our types

    • "string"

    • "num"

    • "boolean"

    • "struct"

    • "list"

  • a language keyword (see keywords)

    • "let"

  • a reserved keyword for future usage

    • "format"

    • "comment"

    • "dict"

    • "json"

    • "enforce"

    • "condition"

    • "audit"

A variable name is invalid if it is:

  • an invalid identifier

  • an enum name

  • a global enum item name

  • a resource name

  • "true" / "false"

Comments

There are two kinds of comments:

  • simple comments # that are not parsed and not stored. They are comments in the common sense: only useful for the developer inside the .rd file

  • parsed comments ## that are considered to be metadata. They are parsed and stored as such, and will be used by the compiler in upcoming versions

Metadata

Metadata allows extending the language and the generation process, and gives users the ability to store structured data alongside resources. A metadata value can be anything available in the language.

Types

string

The language supports multiline strings, interpolation and escape sequences:

  • an escaped string is delimited by "

  • an unescaped string is delimited by """

  • interpolation has the following syntax: ${…​}

  • supported escape sequences: \\, \n, \r, \t
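As a sketch, the three string forms above could be combined as follows (variable names are hypothetical; the let syntax is described later in declaration & definition: variable):

```
let simple = "Hello\nWorld"
let raw = """a "quoted" part, kept verbatim"""
let message = "combined: ${simple}"
```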

integer

Internally represented by a 64-bit signed integer.

float

Internally represented by a double-precision 64-bit floating point number.

boolean

true or false

Internally represented by the boolean exhaustive enum

struct

Structs are delimited by curly braces {…​}, are composed of key: value pairs and use commas (,) as separators

list

Lists are delimited by square brackets […​] and use commas (,) as separators
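A hedged sketch of both container literals (variable names are hypothetical, and the exact quoting of struct keys may differ):

```
let limits = { "cpu": 90, "memory": 75 }
let packages = [ "ntp", "openssh-server" ]
```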

Items

An item is a component of the language.

As explained in a previous chapter, after the header come the declarations and definitions of language items.

Note
- Item declaration means informing the compiler that something now exists, with a given name and optionally a type
- Item definition means giving the said item a value

Before defining variables, an overview of the language keywords as well as a better understanding of types, operators and enums is required.

Local vs global scope

Some items are considered to be declared or defined globally.

Two items cannot have the same name in a given scope. This implies that no local variable can be defined with the same name as a global item, since the latter is by definition available from every single scope.

Declaration and definition patterns

Most item definition patterns look like:

# comment # optional
@metadata="value" # optional

type identifier(parameter_list) # lists are wrapped into `[` or `{` or `(`

Unless specified otherwise, comments and metadata are allowed.

List of possible definitions:

  • enum

    • enum definition

    • sub-enum definition

    • enum alias is a declaration not a definition

  • resource definition

  • state definition

  • variable (global) definition

  • alias definition

  • agent variable (global) is a declaration not a definition

Note
an identifier (abbr: ident) is a word composed of alphanumeric characters and underscores (_). All variable names, parameters, enum fields, aliases, are parsed as identifiers.
Note
a value can be any language type

definition: enum

An enum is a list of values, a bit like a C enum. See enums for an understanding of how language enums work.

Examples:

Exhaustive enum:

enum boolean {
  true,
  false
}

Global, non-exhaustive enum:

global enum system {
  windows,
  linux,
  aix,
  bsd,
  hp_ux,
  solaris,
  *
}

definition: sub-enum

Sub-enums extend an existing enum item, adding children to it

Note
sub-enums derived from a global enum inherit the global property

items in aix {
  aix_5,
  aix_6,
  aix_7,
  *
}

Items can have sub-enums of their own

items in debian_family {
  @cfengine_name="(debian.!ubuntu)"
  debian,
  ubuntu,
  # Warning: update debian if you make change here
  *
}

Note that each element can be supplemented by a comment or metadata.

declaration: enum alias

Aliases of enums and enum items can be defined:

Enum alias: enum alias ident = enum_name
Enum item alias: enum alias ident = enum_name.item
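For instance (focal is a real alias shown later in the Operating systems chapter; os_family is a hypothetical name):

```
enum alias os_family = system
enum alias focal = ubuntu_20_04
```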

definition: resource

resource ident(p1, p2)

A resource can also be defined with a parent link: resource ident(p1, p2): parent_ident

definition: state

A state extends a resource and is private to the resource context

State definition model:

resource_name state state_name(p1, p2) {
  # statements
}

Read more about statements here

Examples:

Configure_NTP state technique() {
  @component = "Package present"
  package("ntp").present("","","") as package_present_ntp
}

The Configure_NTP resource is extended with a new state called technique, which takes no parameters since its content (called a statement) does not require any.

Another example, to illustrate parameterized states:

@metadata="value"
ntp state configuration (to_log="file is absent")
{
  file("/tmp").absent() as abs_file
  if abs_file =~ kept => log_info "${to_log}"
}

In the above example there is a local state declaration and a condition leading to an action.

Note
state declaration is always part of a statement, whereas state definition is a top-level feature

declaration & definition: variable

No comment or metadata is allowed.

Variables are declared using the let keyword, and can optionally be defined inline:

let ident = "value" or let my_var = other_var or any type the language handles

Declaration of namespaces is possible:

let namespace1.namespace2.ident
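A short sketch combining inline definition and a namespaced variable (all names are hypothetical):

```
let ntp_conf = "/etc/ntp.conf"
let ntp.server = "pool.ntp.org"
```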

definition: alias

Aliases allow renaming both elements of a resource and state pair

Example:

alias resource_alias().state_alias() = resource().state()

Enums

Enums are not proper language types, yet they are a full-fledged feature with a defined syntax

enum vs global enum

An enum can be global. There are some key differences:

difference                                     enum    global enum
globally unique items                          no      yes
enum type must be specified at call time       yes     no
each item has an associated global variable*   no      yes

All item names of a global enum are globally unique and usable as is, meaning each becomes reserved: no other variable can be created with that name.

In other words, item names of global enums are directly available in the global namespace.

# arbitrary system list
global enum system {
  windows,
  linux
}

To call an item of this enum, just type linux (rather than system.linux) as it exists in the global namespace.

Still, it remains different from a simple variable since internally a reference to the enum tree is kept.

Access to enum content

It is possible to access an enum item or range of items.

Note
enum ranges are not sorted, therefore range order is the same as the enum definition order

Since global enums declare a variable for each of their items, such items can be called directly; non-global enums require the enum prefix

  • item: enum.item or item if the enum is global

  • range: is expressed this way:

    • enum_item..

    • enum_item..enum_item2

    • ..enum_item

Example:

# arbitrary system list that is not global
enum system {
  windows,
  linux,
  aix,
  bsd
}

`if linux =~ system.linux` # is true
`if linux =~ ..system.windows` # is false
`if windows !~ system.linux..system.bsd` # is true
`if aix =~ system.linux..` # is true

Statements and Expressions

Statements

Statements are the atomic elements of a state definition. This important concept can take the following forms:

  • a state declaration: resource().state() as mystatedecl, to store a state into a local variable that can be called later

  • a variable definition: let mynamespace.myvar: OptionalType = "value". Variables can hold any supported type

  • a variable extension: mynamespace.myvar = value, to update the value of an existing variable

  • a (switch) case. cf case conditions

  • an if condition, that contains an enum expression: if expr => statement. cf if conditions

  • a flow statement: return, log_debug, log_info, log_warn, fail, noop

Example of a state definition that exposes every statement type:

@format=0

resource deb()

deb state technique()
{
  # list of possible statements
  @info="i am a metadata"
  let rights = "g+x"


  permissions("/tmp").dirs("root", "x$${root}i${user}2","g+w") as outvar

  if outvar=~kept => return kept

  case {
    outvar=~repaired  => log_info "repaired",
    outvar=~error => fail "failed agent",
    default => log_info "default case"
  }
}

if conditions

Enum range and item access are explained in access to enum content.

syntax: if expression => statement

case conditions

Cases work the same way switch cases do in other languages.

Syntax:

case {
  case_expression => statement,
  default => statement ## optional unless enum is global
}

Case expressions are mostly standard expressions, so they handle &, |, !, (..) and default the same way. The only difference is that cases have an additional nodefault expression that silently comes with a noop statement.
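Under these rules, a single-case switch using nodefault might look like the following sketch (outvar is assumed to be a state declaration result, as in the earlier state definition example):

```
case {
  outvar =~ repaired => log_info "repaired",
  nodefault
}
```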

Expressions

Expressions are boolean expressions based on enum comparisons.
Their purpose is to check whether a variable is of the right type and contains the provided item as a value, or an ancestor of that item if there is one. Note: default is a value that is equivalent to true.

Expressions are a composition of the following elements:

  • or: expr | expr

  • and: expr & expr

  • not: !expr

  • parentheses: (expr) to handle priorities between expressions

  • default: default keyword that automatically comes with a noop statement

  • expression comparison:

    • var =~ enum variable is equivalent to enum range or item

    • var !~ enum variable is not equivalent to enum range or item

    • implicit boolean comparison that only takes an expression (for example !linux & !windows)

Note
see enum related syntax here, including items and range and expression examples
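Combining these elements, a composed expression could look like this sketch (using items of the global system enum shown earlier, inside a state definition):

```
if !(windows | aix) => log_info "neither Windows nor AIX"
```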

Blocks

Blocks are used to organize several state calls under a common statement or to modify their reporting logic. They are equivalent to the blocks found in Rudder's technique editor. They can be nested and are delimited by braces {} without any keyword.

# Without any statement
@component = "My block component name"
@reporting_logic = "weighted"          ## Mandatory metadata for any block definition
{
  @component = "My method component name"
  command("""my command""").execution() as my_command_succeeds
}
# With statement
if gdm_present_ok =>                   ## The statement must be placed before the metadata
@component = "My block component name"
@reporting_logic = "weighted"          ## Mandatory metadata for any block definition
{
  @component = "My method component name"
  command("""my command""").execution() as my_command_succeeds
}

The reporting_logic metadata can take the following values:

  • weighted: Keep the classical reporting

  • focus:<state id>: The whole block reporting will take the compliance value of the selected state

    • To choose the targeted state, add an id metadata to the state definition and reference its value instead of <state id>.

      @component = "My block component name234"
      @id = "90fbf043-11a8-49c6-85ad-88e65ea36f9a"
      @reporting_logic = "focus:693e80c4-78f2-43c3-aace-3c6b5e0e08b8"
      {
        @component = "My method component name"
        @id = "693e80c4-78f2-43c3-aace-3c6b5e0e08b8"
        command("""/bin/true""").execution() as command_execution__bin_true
      }
  • worst-case-weighted-one: The whole block reporting will take the compliance value of the worst report (will count as 1 report in the compliance computation)

  • worst-case-weighted-sum: The whole block reporting will take the compliance value of the worst report, weighted by the number of methods (will count as N reports in the compliance computation, with N equal to the number of methods in the block)

Appendices

Libraries

stdlib

What is called the stdlib (the language's own standard library) is the following set of files, located in the ./libs folder:

resourcelib.rd

Contains the list of available methods. A method is composed of:

  • a resource

  • its related states

  • the resource parameters

  • the states' own parameters

corelib.rd

Contains standard enums available to language users, like the general-purpose enums boolean and result.

oslib.rd

Contains the exhaustive list of supported operating systems, including major and minor versions.
Stored in the form of nested `enum`s.

More about supported OSes and their usage in language here: supported OSes

cfengine_core.rd

Contains the exhaustive list of CFEngine reserved words and namespaces.
Required since these words cannot be created by the language, to avoid any conflict with CFEngine.

Operating systems

Since the language is designed to configure servers, operating systems are an important part of it.

A part of the stdlib is dedicated to declaring a structured list of supported operating systems in the form of enums.

This chapter explains how to use it.

Syntax

OS list construction
  • Underscore is used as a separator _

  • 4 accuracy layers, all following this syntax rule:

    • system: linux, etc.

    • os: ubuntu, etc.

    • os_major: ubuntu_20, etc.

    • os_major_minor: ubuntu_20_04, etc.

Language syntax

Since the language OS list is composed of enums, it meets the requirements that are specific to enums:

  • a top layer, that is the global enum system

  • sub-enums that expand their parent item: items in linux or items in ubuntu_20

  • aliases can be used to define any sub-enum, like: enum alias focal = ubuntu_20_04

Note
enums are compatible with metadata, not only the outer but also the inner elements.

More about enums

Usage

The language makes use of an exhaustive list of operating systems, including major and minor versions.
This list is defined in the stdlib (more about it here).

For now they are used in conditions to check whether a method should be applied or not.

Several degrees of accuracy can be chosen when defining a condition:

  • system (kernel): windows, linux, bsd, etc

  • operating system: ubuntu, windows_server, etc

  • major version: for example, ubuntu_20

  • minor version: for example, ubuntu_20_04

Yet any sub-enum stands on its own, meaning it can be used directly: ubuntu_20.

Note
The fact that ubuntu_20 is part of ubuntu, itself part of linux in the system enum, only matters for accuracy's sake: for example, if linux evaluates to true on ubuntu_20.
Example with ubuntu_20_10 as the targeted OS

The following expressions will be evaluated to true:

  • if linux

  • if ubuntu

  • if ubuntu_20

The following expressions will be evaluated to false:

  • if windows

  • if ubuntu_20_04
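As a sketch, such OS conditions can guard state declarations inside a state definition; the resource and method below are taken from the earlier NTP example, only the if guard is added:

```
resource Configure_NTP()

Configure_NTP state technique() {
  @component = "Package present"
  if ubuntu_20 => package("ntp").present("","","") as package_present_ntp
}
```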

Usage

There are two ways to interact with the language: directly from the terminal or through the Technique Editor.

Using the command line interface (CLI)

Installation

The language program is called rudderc, standing for Rudder compiler.

To start working with the language, install a beta agent (see rudder agent installation (debian); other OS guides are available).

rudderc being a part of the agent, it is installed at the following location: /opt/rudder/bin/rudderc

Optionally add the rudderc directory to your path (export PATH=$PATH:/opt/rudder/bin) to simply run it with the following command: rudderc

Usage

rudderc has 4 features, called commands, that generate code. The command you need fully depends on the format you have and the output format you want:

  • compile: generates either a DSC / CFEngine technique from a RudderLang technique

  • save: generates a RudderLang technique from a JSON technique (same object format Technique Editor produces)

  • technique read: generates a JSON technique (same object format Technique Editor produces) from a RudderLang technique

  • technique generate: generates a JSON object containing the RudderLang + DSC + CFEngine techniques, from a JSON technique

It is worth noting that the --stdin and --stdout options are the default behavior for technique generate and technique read.

JSON output (which includes logs) is handled. It is optional for compile and save but is the default behavior for technique read and technique generate. By default, all logs but error are printed to STDOUT; error logs are printed to STDERR.

rudderc abilities
The CLI usage (rudderc --help or rudderc -h output, slightly modified)
Rudderc's (4) available commands are callable through subcommands, namely <technique read>, <technique generate>, <save> and <compile>,
allowing it to perform generation or translation from / into the following formats: [JSON, RudderLang, CFEngine, DSC].

Run `rudderc <SUBCOMMAND> --help` to access its inner options and flags helper

Example: rudderc technique generate -c confs/my.conf -i techniques/technique.json -f rudderlang

USAGE:
    rudderc <SUBCOMMAND>

FLAGS:
    -h, --help
            Prints help information

    -V, --version
            Prints version information


SUBCOMMANDS:
    compile      Generates either a DSC / CFEngine technique (`--format` option) from a RudderLang technique
    help         Prints this message or the help of the given subcommand(s)
    save         Generates a RudderLang technique from a JSON technique
    technique    A technique can either be used with one of the two following subcommands: `read` (from rudderlang
                 to json) or `generate` (from json to cfengine or dsc or rudderlang)

rudderc commands all share several flags and options.

Shared FLAGS / OPTIONS:

FLAGS:
    -b, --backtrace    Generates a backtrace in case an error occurs
    -h, --help         Prints help information
        --stdin        Takes stdin as an input rather than using a file. Overwrites input file option
        --stdout       Takes stdout as an output rather than using a file. Overwrites output file option. Dismiss logs directed to stdout. Errors are kept since they are printed to stderr
    -V, --version      Prints version information

OPTIONS:
    -c, --config-file <config-file>    Path of the configuration file to use. A configuration file is required
                                       (containing at least stdlib and generic_methods paths) [default:
                                       /opt/rudder/etc/rudderc.conf]
    -i, --input <file>                 Input file path.
                                       If option path does not exist, concat config input with option.
    -l, --log-level <log-level>        rudderc output logs verbosity [default: warn]  [possible values: off, trace,
                                       debug, info, warn, error]
    -o, --output <file>                Output file path.
                                       If option path does not exist, concat config output with option. Else base output on input.

But some commands come with their own flags and options (listed below) on top of the previously mentioned:

The first command, compile:
Generates either a DSC or a CFEngine technique (--format option) from a RudderLang technique

USAGE:
    rudderc compile [FLAGS] [OPTIONS]

FLAGS:
    -j, --json-logs                     Use json logs instead of human readable output
                                        This option will print a single JSON object that will contain logs, errors and generated data (or the file where it has been generated).
                                        Whichever command is chosen, JSON output format is always the same.
                                        However, some fields (data and destination file) could be set to `null`, make sure to handle `null`s properly
                                        Note that NO_COLOR specs apply by default for json output.
                                        Also note that setting NO_COLOR manually in your env will also work

OPTIONS:
    -f, --format <format>              Enforce a compiler output format (overrides configuration format).
                                       [possible values: cf, cfengine, dsc, json]

...
The second command, save:
Generates a RudderLang technique from a JSON technique

USAGE:
    rudderc save [FLAGS] [OPTIONS]

FLAGS:
    -j, --json-logs    Use json logs instead of human readable output
                       This option will print a single JSON object that will contain logs, errors and generated data (or the file where it has been generated).
                       Whichever command is chosen, JSON output format is always the same. However, some fields (data and destination file) could be set to `null`, make sure to handle `null`s properly
                       Note that NO_COLOR specs apply by default for json output.
                       Also note that setting NO_COLOR manually in your env will also work
The third command, technique read (read is a subcommand of the technique subcommand):
Generates a JSON technique from a RudderLang technique

USAGE:
    rudderc technique read [FLAGS] [OPTIONS]

...
The fourth command, technique generate (generate is a subcommand of the technique subcommand):
Generates a JSON object that comes with RudderLang + DSC + CFEngine technique from a JSON technique

USAGE:
    rudderc technique generate [FLAGS] [OPTIONS]

...

Most options are pretty straightforward, but some explanations might help:

  • Flags and options must be written in kebab-case

  • A configuration file is required because rudderc needs its own libraries to work (the default path should point to an already working Rudder configuration if the rudder agent was installed as previously suggested)

  • Configuration can define flags and options, but the CLI always overrides config-defined ones. i.e.: CLI --output > config output

  • --stdin > --input

  • --stdout > --output > input as destination with updated extension

  • --format > --output technique extension

  • --log-levels are ordered (trace > debug > info > warn > error) which means info includes warn and error

  • --stdin is designed to work with pipes (ex: cat file.rd | rudderc compile -c file.conf -f cf); it won’t wait for an input. Higher priority than the --input option

  • --stdout will dismiss logs directed to stdout (errors are kept since they are printed to stderr). The only thing printed to the terminal is the expected result; if it is empty, there is an error, so try again with logs enabled. Higher priority than the --output option

Options: how input, output and format are dealt with:

Internally, for input, the compiler looks for an existing file until it finds one, in the following order:

  • solely from the CLI input option

  • the configuration input as a directory, joined with the CLI input option

  • solely from the configuration input (if the file exists)

  • if none worked, error

Internally, for output, the compiler looks for an existing path to write a file to, until it finds one:

  • solely from the CLI output option

  • the configuration output as a directory, joined with the CLI output option

  • solely from the configuration output

  • the input path, with only the extension updated

  • if none worked, error

Internally, for format, when required (compile):

  • for any command but compile, the format is set by the program

  • compile command: the explicit CLI --format option. Note that values are limited

  • compile command: the output file extension is used

  • if none worked, error

Configuration file

A configuration file is required because rudderc needs its own libraries to work.

Entire language environment is already set up alongside the agent: this includes all needed libraries and a configuration file with preset paths.

default configuration file
[shared]
stdlib="libs/"
cfengine_methods="repos/ncf/tree/30_generic_methods/"
alt_cfengine_methods="repos/dsc/plugin/ncf/30_generic_methods/"
dsc_methods="repos/dsc/packaging/Files/share/initial-policy/ncf/30_generic_methods/"

[compile]
input="tests/techniques/simplest/technique.rd"
output="tests/techniques/simplest/technique.rd.cf"

[save]
input="tests/techniques/simplest/technique.cf"
output="tests/techniques/simplest/technique.cf.rd"

[technique_read]
input="tests/techniques/simplest/technique.rd"
output="tests/techniques/simplest/technique.rd.json"

[technique_generate]
input="tests/techniques/simplest/technique.json"
output="tests/techniques/simplest/technique_array.json"

[testing_loop]
cfengine="/opt/rudder/bin/cf-promises"
ncf_tools="repos/ncf/tools/"
py_modules="tools/"

The configuration file can be used to shorten arguments.

There is a table for each command (compile, technique_read, technique_generate, save), each of which can hold two limited fields: input and output. The meaningful usage is that these two fields are paths (directories) completed by the --input <file> / --output <file> CLI options: the CLI filename is joined to the configured path. But configuring everything in the file and not using the CLI options also works.

Compilation examples

Below are 5 ways to use the compiler.

Required: a config file to work on a local environment:
tools/my.conf
[shared]
stdlib="libs/"
cfengine_methods="repos/ncf/tree/30_generic_methods/"
alt_cfengine_methods="repos/dsc/plugin/ncf/30_generic_methods/"
dsc_methods="repos/dsc/packaging/Files/share/initial-policy/ncf/30_generic_methods/"
CLI full version
rudderc compile --json-logs --log-level debug --config-file tools/my.conf --input tests/techniques/technique.rd --output tests/techniques/technique.rd.dsc --format dsc
CLI shortened version
rudderc compile -j -l debug -c tools/my.conf -i tests/techniques/technique.rd -f dsc

What it means:

  • Compiles tests/techniques/technique.rd (-i) into tests/techniques/technique.rd.dsc (output based on input),

  • Use the configuration file located at ./tools/my.conf (-c),

  • Output technique format is DSC (--format). Note that this parameter is optional when the output file extension (-o) already defines the right technique format

  • Output log format is JSON (-j),

  • The following log levels: error, warn, info, debug will be printed to the terminal

CLI + config shortened version

By using an adapted configuration file, it can be simplified:

tools/myconf
[shared]
    stdlib="libs/" # only required field for rudderc

[compile]
    input="tests/techniques/"
    output="tests/techniques/"

Lightest compilation using CLI.

rudderc -j -l debug -c tools/myconf -i technique.rd

Input will be the concatenation of the config and CLI values: tests/techniques/technique.rd. Output is still based on input.

config + CLI shortest version

By using an adapted configuration file, it can be simplified:

tools/myconf
[shared]
    stdlib="libs/" # only required field for rudderc

[compile]
    input="rl/technique.rd"
    output="dsc/technique.rd.dsc"

Lightest compilation using CLI.

rudderc -j -l debug -c tools/myconf
JSON Output

If you decided to go with the --json-output option, the output will consist of a single JSON object:

STDOUT
{
  "command": "compile",
  "time": "1600331631367",
  "status": "success",
  "source": "tests/techniques/simplest/technique.rd",
  "logs": [],
  "data": [
    {
      "format": "DSC",
      "destination": "tests/techniques/6.1.rc5/technique.dsc",
      "content": null
    }
  ],
  "errors": []
}
  • Output always uses the same skeleton, which is the one you just read.

  • data field:

    • Length always 0 in case of error # TODO check for technique generate

    • Length always 3 when technique generate is called

    • Length always 1 in any other case since other commands only generate 1 format

  • content field is null if its content has successfully been written to a file

  • destination field is null if content is directly written in the JSON

  • errors field is an array of strings # TODO log field
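A consumer of this output (for example a wrapper script) can rely on that skeleton. A minimal Python sketch of parsing it:

```python
import json

# The JSON skeleton shown above, as emitted by `rudderc compile`
raw = """
{
  "command": "compile",
  "time": "1600331631367",
  "status": "success",
  "source": "tests/techniques/simplest/technique.rd",
  "logs": [],
  "data": [
    {
      "format": "DSC",
      "destination": "tests/techniques/6.1.rc5/technique.dsc",
      "content": null
    }
  ],
  "errors": []
}
"""

result = json.loads(raw)
# On a successful compile, `errors` is empty, `data` holds exactly one
# entry, and `content` is null because it was written to `destination`.
assert result["status"] == "success" and result["errors"] == []
assert len(result["data"]) == 1
assert result["data"][0]["content"] is None
print(result["data"][0]["destination"])  # tests/techniques/6.1.rc5/technique.dsc
```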

Writing a Rudder technique

Rudder language techniques can be compiled to a JSON file which can be imported into Rudder via the Technique Editor or the API. Note that the Rudder language and the Technique Editor are not yet fully compatible. Some features are only available in pure Rudder language (without importing the technique into Rudder) and some are only available by creating a technique through the GUI.

In the restricted case where you want to import a rudder language technique to the technique editor, the known limitations are:

  • Only one resource per technique, which is the technique itself (as understood by the Technique Editor)

  • A state declaration in Rudder language is equivalent to a method call in the Technique Editor

  • Resulting conditions must be manually set and equal to the default ones from the Technique Editor

  • Technique resources cannot yet be defined this way

  • The resource describing the technique must be prefixed by technique_

Example

Let’s write a very simple technique in rudder language:

technique.rd
@format=0
@name="Deploy vim" // (1)
@description="Make sure vim is installed everywhere"
@category = "ncf_techniques"
@version = "1.0" // (2)
@parameters= []
resource technique_Install_Vim() // (3)

technique_Install_Vim state technique() {
        @component = "Package is installed" // (4)
        package("vim").present("","","") as package_present_vim
}
  1. Technique name

  2. Technique version (only 1.0 is currently supported)

  3. Resource declaration followed by its implementation. The resource name must be prefixed by technique_ and will define the technique id

  4. The component metadata sets the reporting for the method call/state

Which can then be compiled to a technique.json file understandable by the technique editor:

/opt/rudder/bin/rudderc technique read -i "technique.rd"

And import the result via the API or the GUI:

curl --silent -k --header "X-API-Token: $(cat /var/rudder/run/api-token)" --header "Content-type: application/json" --request PUT https://localhost/rudder/api/latest/techniques --data "@technique.json" | jq ''

Using the technique editor

rudderc is called from the Technique Editor as a backend program every time a technique is saved. For now it is only a testing loop. Once fully released, every technique will directly be saved using the language.

Note
This testing loop generates two CFEngine techniques, one using the usual ncf framework and another one using the language. The two are then compared.

Since the Technique Editor is meant to simplify method generation, no language code is written there (the language is fully abstracted). It is used as an internal CFEngine generator.

Integration to Rudder

Right now the language is in a testing state. It has been released alongside Rudder 7.0.

This means that while the language CLI features are usable, its current role in the released product is mainly to work in parallel with our legacy generation tools, to monitor how robust the language is and to make sure it does not break things (and fix it when it does).

We made sure this integration is flawless for the end user and does not come with any breaking change. More about the way we are integrating it in the next section (Testing Loop).

Once we are confident enough with our new compiler, it will replace the current generation process.

Testing loop

Every time a technique is saved from the Technique Editor (which outputs JSON, DSC and CFEngine techniques), a script does the following:

  • generates .cf, .json and .rd files in a temporary folder by calling the libraries and the rudderc compiler.

  • compares the original files with the rudderc-generated files.

  • checks and reports (to /var/log/rudder/language/${technique_id}/*) differences and potential compilation or scripting errors.

How to: disable automated usage of language testing loop

While rudderc currently generates testing files only, which means it is totally safe for the user, the parallel compilation can be disabled.

Open the following file: /opt/rudder/etc/rudder-web.properties and set rudder.lang.test-loop.exec to false.

rudderc won’t be called anymore until reactivation.

How to: manual testing

It is possible to directly take a look at the techniques generated via the language, by calling the testing loop on your own from the Rudder server command line.

The testing script location is /opt/rudder/share/language/tools/tester.sh. The script takes the following parameters:

  • An optional --keep parameter that forces techniques generated via language to be saved in a folder located in /tmp (a new folder is generated for every loop). The folder is actually echoed by the script so you can look at its content.

  • The mandatory technique name (actually the Technique ID, which can be found in the Technique Editor)

  • The mandatory technique category, under the form of a path. For example, the default category is ncf_techniques.

The idea is that with these 2 mandatory parameters, the script can build the correct path to the directory holding the technique: /var/rudder/configuration-repository/techniques/${technique_category}/${technique_id}/1.0/

Example:

To test the new_tech technique, located at: /var/rudder/configuration-repository/techniques/systemSettings/networking/new_tech/1.0/technique(.cf)

$ /opt/rudder/share/language/tools/tester.sh --keep new_tech systemSettings/networking
  Done testing in /tmp/tmp.LFA6XjxkuF

$ ls -1 /tmp/tmp.LFA6XjxkuF
new_tech.json                                   # generated by ncf
new_tech.cf                                     # generated by ncf (for now)
new_tech.ps1                                    # generated by ncf (for now)
new_tech.rd                                     # generated by rudderc --save
new_tech.rd.cf                                  # generated by rudderc --compile -f cf
new_tech.rd.ps1                                 # generated by rudderc --compile -f dsc

$ ls -1 /var/log/rudder/language/new_tech/   # generated logs for the technique testing loop (explicit names)
compare_json.log                                ### these logs hold any error or difference that is unexpected
compare_cf.log                                  ### if no error / difference is found, the log file will not be generated
compare_dsc.log                                 ### if no error / difference is found, the log file will not be generated
rudderc.log

Standard library

By default, resource and state parameters:

  • cannot be empty

  • cannot contain only white-spaces

  • have a max size of 16384 chars

Exceptions are explicitly specified in the doc.

States marked as actions represent actions that will be executed at every run. You should generally add a condition when using them.

command

resource command(command)
  • command: Command to run

States

execution_result [unix]

Execute a command and create result conditions depending on its exit code

command(command).execution_result(kept_codes, repaired_codes)
  • kept_codes: List of codes that produce a kept status separated with commas (ex: 1,2,5)

  • repaired_codes: List of codes that produce a repaired status separated with commas (ex: 3,4,6)

Execute a command and create result conditions depending on the exit codes given in parameters. If an exit code is in neither list it will lead to an error status. If you want 0 to be a success you have to list it in the kept_codes list.
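For instance, a state declaration using this method in a technique could look like this (the command, codes and alias name are illustrative):

command("/usr/bin/my_check").execution_result("0,1", "2") as my_check_result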


execution_once [unix]

Execute a command only once on a node

command(command).execution_once(ok_codes, until, unique_id)
  • ok_codes: List of codes that produce a repaired status separated with commas (ex: 1,2,5). Defaults to 0.

    • can be empty

  • unique_id: Identifies the action without losing track of it if the command changes. Defaults to the command itself.

    • can be empty

  • until: Try to execute the command until a particular state: 'ok', 'any' (defaults to 'any')

    • empty, any, ok

This method is useful for specific commands that should only be executed once per node.

If you can spot a condition for the command execution by testing the state of its target, it is better to use the condition_from_command method to test the state coupled with the command_execution_result method to run the command if necessary.

The method will:

Define the command_execution_once_${command}_kept condition and do nothing if a command_execution_once has already been executed on this machine with the same Unique id.

Execute the command if it is the first occurrence and:

  • If the parameter Until is any, it will consider the command as executed on the machine and define either:

    • command_execution_once_${command}_repaired if the return code is in ok_codes,

    • command_execution_once_${command}_error otherwise.

  • If the parameter Until is ok and:

    • If the return code is in the Ok codes list, define the command_execution_once_${command}_repaired condition

    • If the return code is not in the Ok codes list, it defines the command_execution_once_${command}_error condition and retries at the next agent run.

If an exit code is not in the list it will lead to an error status. If you want "0" to be a success you have to list it in the Ok codes list

Example:

If you use:

    command_execution_once("command -a -t", "0", "ok", "my_program_setup")

It will retry to run command -a -t until it returns "0". Then it will not execute it again.


execution [windows, unix]

Execute a command

command(command).execution()

Execute the command in a shell. On the DSC agent, the command is executed through the PowerShell & operator.

The method status will report:

  • a Repaired if the return code is "0",

  • an Error if the return code is not "0"


condition

resource condition(condition)
  • condition: Prefix of the class (condition) generated

States

once [unix]

Create a new condition only once

condition(condition).once()

This method defines a condition named from the parameter Condition when it is called for the first time. Subsequent agent executions will not define the condition.

This allows executing actions only once on a given machine. The created condition is global to the agent.

Example:

If you use:

condition_once("my_condition")

The first agent run will have the condition my_condition defined, contrary to subsequent runs for which no condition will be defined.


from_variable_match [windows, unix]

Test the content of a string variable

condition(condition).from_variable_match(variable_name, expected_match)
  • expected_match: Regex to use to test if the variable content is compliant

  • variable_name: Complete name of the variable being tested, like my_prefix.my_variable

Test a variable content and create conditions depending on its value:

  • If the variable is found and its content matches the given regex:

    • a ${condition}_true condition,

    • and kept outcome status

  • If the variable is found but its content does not match the given regex:

    • a ${condition}_false condition,

    • and a kept outcome status

  • If the variable can not be found:

    • a ${condition}_false condition

    • and an error outcome status

/!\ Regexes for Unix machines must be PCRE-compatible and those for Windows agents must respect the .NET regex format.

  • If you want to test a technique parameter, use the technique_id of the technique as the variable prefix and the parameter_name as the variable name.
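For instance, a state declaration using this method could look like this (variable, regex and alias names are illustrative):

condition("ntp_server_ok").from_variable_match("my_prefix.my_variable", "^pool\.ntp\.org$") as ntp_server_check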


from_variable_existence [windows, unix]

Create a condition from the existence of a variable

condition(condition).from_variable_existence(variable_name)
  • variable_name: Complete name of the variable being tested, like my_prefix.my_variable

This method defines a condition:

  • ${condition}_true if the variable named from the parameter Variable name is defined

  • ${condition}_false if the variable named from the parameter Variable name is not defined

Also, this method always results in a success outcome status.
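For instance, a state declaration using this method could look like this (names are illustrative):

condition("proxy_configured").from_variable_existence("my_prefix.my_variable") as proxy_configured_check

This would define proxy_configured_true or proxy_configured_false depending on whether my_prefix.my_variable is defined.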


from_expression_persistent [unix]

Create a new condition that persists across runs

condition(condition).from_expression_persistent(expression, duration)
  • duration: The persistence suffix in minutes

  • expression: The expression evaluated to create the condition (use 'any' to always evaluate to true)

This method evaluates an expression (a condition combination), and produces a ${condition}_true or a ${condition}_false condition depending on the result of the expression, which will last for the Duration time:

  • This method always results in a success outcome status

  • If the expression evaluation results in a "defined" state, this will define a ${condition}_true condition,

  • If the expression evaluation results in an "undefined" state, this will produce a ${condition}_false condition.

Calling this method with a condition expression transforms a complex expression into a single class condition.

The created condition is global to the agent and is persisted across runs. The persistence duration is controlled using the parameter Duration which defines for how long the target condition will be defined (in minutes). Note that there is no way to persist indefinitely.

Example:

If you want to check if a condition evaluates to true, like checking that you are on Monday, 2am, on RedHat systems, and make it last one hour you can use the following policy

condition_from_expression_persistent("backup_time", "Monday.redhat.Hr02", "60")

The method will define:

  • In any case:

    • condition_from_expression_persistent_backup_time_kept

    • condition_from_expression_persistent_backup_time_reached

  • And:

    • backup_time_true if the system is a RedHat like system, on Monday, at 2am, and will persist for Duration minutes,

    • backup_time_false if the system is not a RedHat-like system, or it’s not Monday, or it’s not 2am

    • no extra condition if the expression is invalid (cannot be parsed)

Notes:

Rudder will automatically "canonify" the given Condition prefix at execution time, which means that all non [a-zA-Z0-9_] characters will be replaced by an underscore.


from_expression [unix]

Create a new condition

condition(condition).from_expression(expression)
  • expression: The expression evaluated to create the condition (use 'any' to always evaluate to true)

This method evaluates an expression, and produces a ${condition}_true or a ${condition}_false condition depending on the result of the expression evaluation:

  • This method always results in a success outcome status

  • If the evaluation results in a "defined" state, this will define a ${condition}_true condition,

  • If the evaluation results in an "undefined" state, this will produce a ${condition}_false condition.

Calling this method with a condition expression transforms a complex expression into a single condition.

The created condition is global to the agent.

Example

If you want to check if a condition evaluates to true, like checking that you are on Monday, 2am, on RedHat systems, you can use the following policy

condition_from_expression("backup_time", "Monday.redhat.Hr02")

The method will define:

  • In any case:

    • condition_from_expression_backup_time_kept

    • condition_from_expression_backup_time_reached

  • And:

    • backup_time_true if the system is a RedHat like system, on Monday, at 2am.

    • backup_time_false if the system is not a RedHat-like system, or it’s not Monday, or it’s not 2am

    • no extra condition if the expression is invalid (cannot be parsed)

Notes:

Rudder will automatically "canonify" the given Condition prefix at execution time, which means that all non [a-zA-Z0-9_] characters will be replaced by an underscore.


from_command [windows, unix]

Execute a command and create result conditions depending on its exit code

condition(condition).from_command(command, true_codes, false_codes)
  • command: The command to run

  • false_codes: List of codes that produce a false status separated with commas (ex: 3,4,6)

  • true_codes: List of codes that produce a true status separated with commas (ex: 1,2,5)

This method executes a command, and defines a ${condition}_true or a ${condition}_false condition depending on the result of the command:

  • If the exit code is in the "True codes" list, this will produce a kept outcome and a ${condition}_true condition,

  • If the exit code is in the "False codes" list, this will produce a kept outcome and a ${condition}_false condition,

  • If the exit code is neither in "True codes" nor in "False codes", or if the command cannot be found, it will produce an error outcome and no condition from ${condition}

The created condition is global to the agent.

Example:

If you run a command /bin/check_network_status that outputs code 0, 1 or 2 in case of a correct configuration, and 18 or 52 in case of an invalid configuration, and you want to define a condition based on its execution result, you can use:

condition_from_command("network_correctly_defined", "/bin/check_network_status", "0,1,2", "18,52")
  • If the command exits with 0, 1 or 2, then it will define the conditions

    • network_correctly_defined_true,

    • condition_from_command_network_correctly_defined_kept,

    • condition_from_command_network_correctly_defined_reached,

  • If the command exits with 18 or 52, then it will define the conditions

    • network_correctly_defined_false,

    • condition_from_command_network_correctly_defined_kept,

    • condition_from_command_network_correctly_defined_reached

  • If the command exits any other code or is not found, then it will define the conditions

    • condition_from_command_network_correctly_defined_error,

    • condition_from_command_network_correctly_defined_reached

Notes:

  • In audit mode, this method will still execute the command passed in parameter, which means that you should only pass non-system-impacting commands to this method.

  • Rudder will automatically "canonify" the given Condition prefix at execution time, which means that all non [a-zA-Z0-9_] characters will be replaced by an underscore.


directory

resource directory(path)
  • path: Path of the directory

States

present [windows, unix]

Create a directory if it doesn’t exist

directory(path).present()

Create a directory if it doesn’t exist.


check_exists [unix]

Checks if a directory exists

directory(path).check_exists()

This bundle will define a condition directory_check_exists_${path}_{ok, reached, kept} if the directory exists, or directory_check_exists_${path}_{not_ok, reached, not_kept, failed} if the directory doesn’t exist.


absent [windows, unix]

Ensure a directory’s absence

directory(path).absent(recursive)
  • recursive: Should deletion be recursive, "true" or "false" (defaults to "false")

    • can be empty

If recursive is false, only an empty directory can be deleted.


dsc

resource dsc(tag)
  • tag: Name of the configuration, for information purposes

States

from_configuration [windows]

Compile and apply a given DSC configuration defined by a ps1 file

dsc(tag).from_configuration(config_file)
  • config_file: Absolute path of the .ps1 configuration file

Compile and apply a given DSC configuration. The DSC configuration must be defined within a .ps1 file, and is expected to be "self compilable". A configuration data file (.psd1) containing variables can also be referenced by the ps1 script, by referring to it in the Configuration call.

The method will try to compile the configuration whenever the policies of the node are updated or if the previous compilation did not succeed.

All the Rudder variables are usable in your configuration.

Also, the current method only allows DSC configurations to be run on the localhost target node, and when using a DSC push setup. Note that it may conflict with already existing DSC configurations not handled by Rudder.

Example 1 - without external data

Here is a configuration named EnsureWebServer.ps1 with simple Windows feature management:

Configuration EnsureWebServer {
 Node 'localhost' {
   # Install the IIS role
   WindowsFeature IIS {
       Ensure       = 'Present'
       Name         = 'Web-Server'
   }

   # Install the ASP .NET 4.5 role
   WindowsFeature AspNet45 {
       Ensure       = 'Present'
       Name         = 'Web-Asp-Net45'
   }
 }
}

EnsureWebServer
Example 2 - with external data

DSC configurations can be fed with external data; here is an example using a data file Data.psd1 containing:

 @{
     AllNodes = @();
     NonNodeData =
     @{
       ConfigFileContents = "Hello World! This file is managed by Rudder"
     }
 }

Used to feed the HelloWorld.ps1 Contents key:

Configuration HelloWorld {
  Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

  Node 'localhost' {
    File HelloWorld {
        DestinationPath = "${RudderBase}\HelloWorld.txt"
        Ensure          = "Present"
        Contents        = $ConfigurationData.NonNodeData.ConfigFileContents
    }
  }
}

HelloWorld -ConfigurationData /path/to/Data.psd1

Please note that the reference to the data file is done inside the configuration file.


built_in_resource [windows]

Apply a given built-in DSC resource to the node

dsc(tag).built_in_resource(scriptBlock, resourceName)
  • resourceName: Explicit name of the DSC resource to apply

  • scriptBlock: Desired state for the resource

Apply a given DSC resource to the node.

Parameters
  • tag parameter is purely informative and has no impact on the resource.

  • ResourceName must be the explicit name of the DSC resource you wish to apply

  • ScriptBlock must be a PowerShell script in plain text, returning a Hashtable containing the parameters to pass to the resource.

Note that this method can only apply built-in Windows resources. It will not be able to apply an external resource.

Example

If we want to apply a Registry resource, the resourceName used will be Registry and a potential ScriptBlock could be:

 $HKLM_SOFT="HKEY_LOCAL_MACHINE\SOFTWARE"
 $Ensure      = "Present"
 $Key         = $HKLM_SOFT + "\ExampleKey"

 $table = @{}
 $table.Add("Ensure", $Ensure)
 $table.Add("Key", $Key)
 $table.Add("ValueName", "RudderTest")
 $table.Add("ValueData", "TestData")
 $table

Note that the whole ScriptBlock will be readable in the Rudder logs or in the policy files.


dsc_mof_file

resource dsc_mof_file(MOFFile)
  • MOFFile: Path to the MOF files that need to be applied

States

apply [windows]

Ensure that all MOF files under MOFFile are applied via DSC.

dsc_mof_file(MOFFile).apply()

Ensure that all MOF files contained under the target folder are applied via DSC on the target node.


environment

resource environment(name)
  • name: Name of the environment variable

States

variable_present [unix]

Enforce an environment variable value.

environment(name).variable_present(value)
  • value: Value of the environment variable

Force the value of a shell environment variable. The variable will be written in /etc/environment. A newly created environment variable will not be usable by the agent until it is restarted.
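For instance, the following state declaration (value and alias name are illustrative) would enforce JAVA_HOME in /etc/environment:

environment("JAVA_HOME").variable_present("/opt/java") as java_home_present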


file

resource file(path)
  • path: File name (absolute path on the target node)

States

symlink_present_option [unix]

Create a symlink at a destination path pointing to a source target. It is also possible to enforce its creation.

file(path).symlink_present_option(source, enforce)
  • enforce: Force symlink if file already exist (true or false)

  • source: Source file (absolute path on the target node)


symlink_present_force [unix]

Create a symlink at a destination path pointing to a source target, even if a file or directory already exists.

file(path).symlink_present_force(source)
  • source: Source file (absolute path on the target node)


symlink_present [unix]

Create a symlink at a destination path pointing to a source target, except if a file or directory already exists.

file(path).symlink_present(source)
  • source: Source file (absolute path on the target node)


report_content_tail [unix]

Report the tail of a file

file(path).report_content_tail(limit)
  • limit: Number of lines to report (default is 10)

    • can be empty, must match: ^\d*$

Report the tail of a file.

This method does nothing on the system, but only reports a partial content of a given file. This allows centralizing this information on the server, and avoids having to connect to each node to get this information.

Note
This method only works in "Full Compliance" reporting mode.
Parameters

Target

This is the file you want to report content from. The method will return an error if it does not exist.

Limit

The number of lines to report.

Examples
# To get the last 3 lines of /etc/hosts
file_report_content_tail("/etc/hosts", "3");

report_content_head [unix]

Report the head of a file

file(path).report_content_head(limit)
  • limit: Number of lines to report (default is 10)

    • can be empty, must match: ^\d*$

Report the head of a file.

This method does nothing on the system, but only reports a partial content of a given file. This allows centralizing this information on the server, and avoids having to connect to each node to get this information.

Note
This method only works in "Full Compliance" reporting mode.
Parameters

Target

This is the file you want to report content from. The method will return an error if it does not exist.

Limit

The number of lines to report.

Examples
# To get the first 3 lines of /etc/hosts
file_report_content_head("/etc/hosts", "3");

report_content [unix]

Report the content of a file

file(path).report_content(regex, context)
  • context: Number of context lines when matching regex (default is 0)

    • can be empty, must match: ^\d*$

  • regex: Regex to search in the file (empty for whole file)

    • can be empty

Report the content of a file.

This method does nothing on the system, but only reports a complete or partial content of a given file. This allows centralizing this information on the server, and avoids having to connect to each node to get this information.

Note
This method only works in "Full Compliance" reporting mode.
Parameters

Target

This is the file you want to report content from. The method will return an error if it does not exist.

Regex

If empty, the method will report the whole file content. If set, the method will grep the file for the given regular expression, and report the result.

Context

When specifying a regex, will add the number of lines of context around matches (default is 0, i.e. no context).

When reporting the whole file, this parameter is ignored.

Examples
# To get the whole /etc/hosts content
file_report_content("/etc/hosts", "", "");
# To get lines starting by "nameserver" in /etc/resolv.conf
file_report_content("/etc/resolv.conf", "^nameserver", "");
# To get lines containing "rudder" from /etc/hosts with 3 lines of context
file_report_content("/etc/hosts", "rudder", "3");

replace_lines [unix]

Ensure that a line in a file is replaced by another one

file(path).replace_lines(line, replacement)
  • line: Line to match in the file

  • replacement: Line to add in the file as a replacement

You can replace lines in a file, based on a regular expression and captured patterns.

Syntax

The content to match in the file is an unanchored PCRE regular expression, which you can replace with the content of replacement.

Content can be captured in the regular expression, and reused with the notation ${match.1} (for the first matched content), ${match.2} for the second, etc., and the special captured group ${match.0} for the whole text.

Example

Here is an example removing specific enclosing tags:

file_replace_lines("/PATH_TO_MY_FILE/file", "<my>(.*)<pattern>", "my ${match.1} pattern")

present [windows, unix]

Create a file if it doesn’t exist

file(path).present()

lines_present [windows, unix]

Ensure that one or more lines are present in a file

file(path).lines_present(lines)
  • lines: Line(s) to add in the file


lines_absent [windows, unix]

Ensure that a line is absent in a specific location

file(path).lines_absent(lines)
  • lines: Line(s) to remove in the file


line_present_in_xml_tag [unix]

Ensure that a line is present in a tag in a specific location. The objective of this method is to handle XML-style files. Note that if the tag is not present in the file, it won’t be added, and the edit will fail.

file(path).line_present_in_xml_tag(tag, line)
  • line: Line to ensure is present inside the section

  • tag: Name of the XML tag under which lines should be added (not including the <> brackets)


line_present_in_ini_section [unix]

Ensure that a line is present in a section in a specific location. The objective of this method is to handle INI-style files.

file(path).line_present_in_ini_section(section, line)
  • line: Line to ensure is present inside the section

  • section: Name of the INI-style section under which lines should be added (not including the [] brackets)


keys_values_present [unix]

Ensure that the file contains all pairs of "key separator value", with arbitrary separator between each key and its value

file(path).keys_values_present(keys, separator)
  • keys: Name of the dict structure (without "${}") containing the keys (keys of the dict), and values to define (values of the dict)

  • separator: Separator between key and value, for example "=" or " " (without the quotes)

    • can contain only white-space chars

This method ensures key-value pairs are present in a file.

Usage

This method will iterate over the key-value pairs in the dict, and:

  • If the key is not defined in the destination, add the key + separator + value line.

  • If the key is already present in the file, replace key + separator + anything by key + separator + value.

This method always ignores spaces and tabs when replacing (which means for example that key = value will match the = separator).

Keys are considered unique (to allow replacing the value), so you should use file_ensure_lines_present if you want to have multiple lines with the same key.

Example

If you have an initial file (/etc/myfile.conf) containing:

key1 = something
key3 = value3

To define key-value pairs, use the variable_dict or variable_dict_from_file methods.

For example, if you use the following content (stored in /tmp/data.json):

{
   "key1": "value1",
   "key2": "value2"
}

With the following policy:

# Define the `content` variable in the `configuration` prefix from the json file
variable_dict_from_file("configuration", "content", "/tmp/data.json")
# Enforce the presence of the key-value pairs
file_ensure_keys_values("/etc/myfile.conf", "configuration.content", " = ")

The destination file (/etc/myfile.conf) will contain:

key1 = value1
key3 = value3
key2 = value2

key_value_present_option [unix]

Ensure that the file contains a pair of "key separator value", with options on the spacing around the separator

file(path).key_value_present_option(key, value, separator, option)
  • key: Key to define

  • option: Option for the spacing around the separator: strict, which prevents spacing (spaces or tabs) around separators, or lax, which accepts any number of spaces around separators

    • strict, lax

  • separator: Separator between key and value, for example "=" or " " (without the quotes)

    • can contain only white-space chars

  • value: Value to define

Edit (or create) the file, and ensure it contains an entry key → value with arbitrary separator between the key and its value. If the key is already present, the method will change the value associated with this key.
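
Example

For instance, to enforce PermitRootLogin no, with a single space as separator and no extra spacing allowed, in a hypothetical sshd configuration:

file_key_value_present_option("/etc/ssh/sshd_config", "PermitRootLogin", "no", " ", "strict")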


key_value_present_in_ini_section [unix]

Ensure that a key-value pair is present in a section in a specific location. The objective of this method is to handle INI-style files.

file(path).key_value_present_in_ini_section(section, name, value)
  • name: Name of the key to add or edit

  • section: Name of the INI-style section under which the line should be added or modified (not including the [] brackets)

  • value: Value of the key to add or edit
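
Example

For instance, to ensure the key max_execution_time is set to 30 under the [PHP] section of a hypothetical /etc/php.ini:

file_key_value_present_in_ini_section("/etc/php.ini", "PHP", "max_execution_time", "30")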


key_value_present [unix]

Ensure that the file contains a pair of "key separator value"

file(path).key_value_present(key, value, separator)
  • key: Key to define

  • separator: Separator between key and value, for example "=" or " " (without the quotes)

    • can contain only white-space chars

  • value: Value to define

Edit (or create) the file, and ensure it contains an entry key → value with arbitrary separator between the key and its value. If the key is already present, the method will change the value associated with this key.
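
Example

For instance, to ensure the key vm.swappiness is set to 10, separated by =, in a hypothetical /etc/sysctl.conf:

file_key_value_present("/etc/sysctl.conf", "vm.swappiness", "10", "=")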


key_value_parameter_present_in_list [unix]

Ensure that one parameter exists in a list of parameters, on one single line, in the right hand side of a key→values line

file(path).key_value_parameter_present_in_list(key, key_value_separator, parameter, parameter_separator, leading_char_separator, closing_char_separator)
  • closing_char_separator: closing character of the parameters

    • can be empty

  • key: Full key name

  • key_value_separator: character used to separate key and value in a key-value line

    • can contain only white-space chars

  • leading_char_separator: leading character of the parameters

    • can be empty

  • parameter: String representing the sub-value to ensure is present in the list of parameters that form the value part of that line

  • parameter_separator: Character used to separate parameters in the list

    • can contain only white-space chars

Edit the file, and ensure it contains the defined parameter in the list of values on the right hand side of a key→values line. If the parameter is not there, it will be added at the end, separated by parameter_separator. Optionally, you can define leading and closing characters to enclose the parameters. If the key does not exist in the file, it will be added, along with the parameter.

Example

If you have an initial file (/etc/default/grub) containing

GRUB_CMDLINE_XEN="dom0_mem=16G"

To add parameter dom0_max_vcpus=32 in the right hand side of the line, you’ll need the following policy

file_ensure_key_value_parameter_in_list("/etc/default/grub", "GRUB_CMDLINE_XEN", "=", "dom0_max_vcpus=32", " ", "\"", "\"");

key_value_parameter_absent_in_list [unix]

Ensure that a parameter doesn’t exist in a list of parameters, on one single line, in the right hand side of a key→values line

file(path).key_value_parameter_absent_in_list(key, key_value_separator, parameter_regex, parameter_separator, leading_char_separator, closing_char_separator)
  • closing_char_separator: closing character of the parameters

    • can be empty

  • key: Full key name

  • key_value_separator: character used to separate key and value in a key-value line

    • can contain only white-space chars

  • leading_char_separator: leading character of the parameters

    • can be empty

  • parameter_regex: Regular expression matching the sub-value to ensure is not present in the list of parameters that form the value part of that line

  • parameter_separator: Character used to separate parameters in the list

    • can contain only white-space chars

Edit the file, and ensure it does not contain the defined parameter in the list of values on the right hand side of a key→values line. If the parameter is there, it will be removed. Please note that the parameter can be a regular expression. The method will also remove any whitespace character between the parameter and parameter_separator. Optionally, you can define leading and closing characters to enclose the parameters.

Example

If you have an initial file (/etc/default/grub) containing

GRUB_CMDLINE_XEN="dom0_mem=16G dom0_max_vcpus=32"

To remove parameter dom0_max_vcpus=32 in the right hand side of the line, you’ll need the following policy

file_ensure_key_value_parameter_not_in_list("/etc/default/grub", "GRUB_CMDLINE_XEN", "=", "dom0_max_vcpus=32", " ", "\"", "\"");

from_template_type [unix]

Build a file from a template

file(path).from_template_type(source_template, template_type)
  • source_template: Source file containing a template to be expanded (absolute path on the target node)

  • template_type: Template type (cfengine, jinja2 or mustache)

These methods write a file based on a provided template and the data available to the agent.

Usage

To use these methods (file_from_template_*), you need to have:

  • a template file

  • data to fill this template

The template file should be somewhere on the local file system, so if you want to use a file shared from the policy server, you need to copy it first (using file_copy_from_remote_source).

It is common to use a specific folder to store those templates after copy, for example in ${sys.workdir}/tmp/templates/.

The data that will be used while expanding the template is the data available in the agent at the time of expansion. That means:

  • Agent’s system variables (${sys.*}, …​) and conditions (linux, …​)

  • data defined during execution (result conditions of generic methods, …​)

  • conditions based on condition_ generic methods

  • data defined in ncf using variable_* generic methods, which allow for example to load data from local json or yaml files.

Template types

ncf currently supports three templating languages:

  • mustache templates, which are documented in file_from_template_mustache

  • jinja2 templates, which are documented in file_from_template_jinja2

  • CFEngine templates, which are a legacy implementation that is here for compatibility, and should not be used for new templates.

Example

Here is a complete example of templating usage:

The (basic) template file, present on the server in /PATH_TO_MY_FILE/ntp.conf.mustache (for syntax reference, see file_from_template_mustache):

{{#classes.linux}}
server {{{vars.configuration.ntp.hostname}}}
{{/classes.linux}}
{{^classes.linux}}
server hardcoded.server.example
{{/classes.linux}}

And on your local node in /tmp/ntp.json, the following json file:

{ "hostname": "my.hostname.example" }

And the following policy:

# Copy the file from the policy server
file_copy_from_remote_source("/PATH_TO_MY_FILE/ntp.conf.mustache", "${sys.workdir}/tmp/templates/ntp.conf.mustache")
# Define the `ntp` variable in the `configuration` prefix from the json file
variable_dict_from_file("configuration", "ntp", "/tmp/ntp.json")
# Expand your template
file_from_template_type("${sys.workdir}/tmp/templates/ntp.conf.mustache", "/etc/ntp.conf", "mustache")
# or
# file_from_template_mustache("${sys.workdir}/tmp/templates/ntp.conf.mustache", "/etc/ntp.conf")

The destination file will contain the expanded content, for example on a Linux node:

server my.hostname.example

from_template_mustache [windows, unix]

Build a file from a mustache template

file(path).from_template_mustache(source_template)
  • source_template: Source file containing a template to be expanded (absolute path on the target node)

See file_from_template_type for general documentation about templates usage.

Syntax

Mustache is a logic-less templating language, available in a lot of languages, and used for file templating in Rudder. The mustache syntax reference is https://mustache.github.io/mustache.5.html. The Windows implementation follows the standard, while the Unix one is a bit richer, as described below.

We will describe here how to get agent data into a template. As explained in the general templating documentation, various kinds of data can be accessed in a mustache template.

The main difference from standard mustache syntax is the use of prefixes in all expanded values:

  • classes to access conditions

  • vars to access all variables

Classes

Here is how to display content depending on conditions definition:

{{#classes.my_condition}}
   content when my_condition is defined
{{/classes.my_condition}}

{{^classes.my_condition}}
   content when my_condition is *not* defined
{{/classes.my_condition}}

Note: You cannot use condition expressions here.

Scalar variable

Here is how to display a scalar variable value (integer, string, …​), if you have defined variable_string("variable_prefix", "my_variable", "my_value"):

{{{vars.variable_prefix.my_variable}}}

We use the triple {{{ }}} to avoid escaping html entities.

Iteration

Iteration is done using a syntax similar to scalar variables, but applied on container variables.

  • Use {{#vars.container}} content {{/vars.container}} to iterate

  • Use {{{.}}} for the current element value in iteration

  • Use {{{key}}} for the key value in current element

  • Use {{{.key}}} for the key value in current element (Linux only)

  • Use {{{@}}} for the current element key in iteration (Linux only)

To iterate over a list, for example defined with:

variable_iterator("variable_prefix", "iterator_name", "a,b,c", ",")

Use the following file:

{{#vars.variable_prefix.iterator_name}}
{{{.}}} is the current iterator_name value
{{/vars.variable_prefix.iterator_name}}

Which will be expanded as:

a is the current iterator_name value
b is the current iterator_name value
c is the current iterator_name value

To iterate over a container defined by the following json file, loaded with variable_dict_from_file("variable_prefix", "dict_name", "path"):

{
   "hosts": [
       "host1",
       "host2"
   ],
   "files": [
       {"name": "file1", "path": "/path1", "users": [ "user1", "user11" ] },
       {"name": "file2", "path": "/path2", "users": [ "user2" ] }
   ],
   "properties": {
       "prop1": "value1",
       "prop2": "value2"
   }
}

Use the following template:

{{#vars.variable_prefix.dict_name.hosts}}
{{{.}}} is the current hosts value
{{/vars.variable_prefix.dict_name.hosts}}

# will display the name and path of the current file
{{#vars.variable_prefix.dict_name.files}}
{{{name}}}: {{{path}}}
{{/vars.variable_prefix.dict_name.files}}
# Lines below will only be properly rendered in unix Nodes
# will display the users list of each file
{{#vars.variable_prefix.dict_name.files}}
{{{name}}}:{{#users}} {{{.}}}{{/users}}
{{/vars.variable_prefix.dict_name.files}}


# will display the current properties key/value pair
{{#vars.variable_prefix.dict_name.properties}}
{{{@}}} -> {{{.}}}
{{/vars.variable_prefix.dict_name.properties}}

Which will be expanded as:

host1 is the current hosts value
host2 is the current hosts value

# will display the name and path of the current file
file1: /path1
file2: /path2

# Lines below will only be properly rendered in unix Nodes
# will display the users list of each file
file1: user1 user11
file2: user2

# will display the current properties key/value pair
prop1 -> value1
prop2 -> value2

Note: You can use {{#-top-}} …​ {{/-top-}} to iterate over the top level container.

System variables

Some sys dict variables (like sys.ipv4) are also accessible as string, for example:

  • ${sys.ipv4} gives 54.32.12.4

  • ${sys.ipv4[eth0]} gives 54.32.12.4

  • ${sys.ipv4[eth1]} gives 10.45.3.2

These variables are not accessible as dict in the templating data, but are represented as string:

  • ipv4 is a string variable in the sys dict with value 54.32.12.4

  • ipv4[eth0] is a string variable in the sys dict with value 54.32.12.4

  • ipv4 is not accessible as a dict in the template

To access these values, use the following syntax in your mustache templates:

{{{vars.sys.ipv4[eth0]}}}

from_template_jinja2 [unix]

Build a file from a jinja2 template

file(path).from_template_jinja2(source_template)
  • source_template: Source file containing a template to be expanded (absolute path on the target node)

See file_from_template_type for general documentation about templates usage.

This generic method will build a file from a jinja2 template using data (conditions and variables) found in the execution context.

Setup

It requires the jinja2 Python module to be installed on the node; this can usually be done in ncf with package_present("python-jinja2", "", "", "").

Warning
If you are using a jinja2 version older than 2.7 trailing newlines will not be preserved in the destination file.
Syntax

Jinja2 is a powerful templating language, running in Python. The Jinja2 syntax reference documentation is http://jinja.pocoo.org/docs/dev/templates/, which will likely be useful, as Jinja2 is very rich and allows a lot more than what is explained here.

This section presents some simple cases that cover what can be done with mustache templating, and the way the agent data is provided to the templating engine.

The main specificity of jinja2 templating is the use of two root containers:

  • classes to access currently defined conditions

  • vars to access all currently defined variables

Note: You can add comments in the template with {# …​ #}; they will not be rendered in the output file.

You can extend the Jinja2 templating engine by adding custom FILTERS and TESTS in the script /var/rudder/configuration-repository/ncf/10_ncf_internals/modules/extensions/jinja2_custom.py

For instance, to add a filter to uppercase a string and a test if a number is odd, you can create the file /var/rudder/configuration-repository/ncf/10_ncf_internals/modules/extensions/jinja2_custom.py on your Rudder server with the following content:

def uppercase(input):
    return input.upper()

def odd(value):
    return True if (value % 2) else False

FILTERS = {'uppercase': uppercase}
TESTS = {'odd': odd}

These filters and tests will be usable in your jinja2 templates automatically.

Conditions

To display content based on conditions definition:

{% if classes.my_condition is defined  %}
   display this if defined
{% endif %}
{% if not classes.my_condition is defined %}
   display this if not defined
{% endif %}

Note: You cannot use condition expressions here.

You can also use other tests, for example other built-in ones or those defined in jinja2_custom.py:

{% if vars.variable_prefix.my_number is odd  %}
   display if my_number is odd
{% endif %}

Scalar variables

Here is how to display a scalar variable value (integer, string, …​), if you have defined variable_string("variable_prefix", "my_variable", "my_value"):

{{ vars.variable_prefix.my_variable }}

You can also modify what is displayed by using filters. The built-in filters can be extended in jinja2_custom.py:

{{ vars.variable_prefix.my_variable | uppercase }}

Will display the variable in uppercase.

Iteration

To iterate over a list, for example defined with:

variable_iterator("variable_prefix", "iterator_name", "a,b,c", ",")

Use the following file:

{% for item in vars.variable_prefix.iterator_name %}
{{ item }} is the current iterator_name value
{% endfor %}

Which will be expanded as:

a is the current iterator_name value
b is the current iterator_name value
c is the current iterator_name value

To iterate over a container defined by the following json file, loaded with variable_dict_from_file("variable_prefix", "dict_name", "path"):

{
   "hosts": [
       "host1",
       "host2"
   ],
   "files": [
       {"name": "file1", "path": "/path1", "users": [ "user1", "user11" ] },
       {"name": "file2", "path": "/path2", "users": [ "user2" ] }
   ],
   "properties": {
       "prop1": "value1",
       "prop2": "value2"
   }
}

Use the following template:

{% for item in vars.variable_prefix.dict_name.hosts %}
{{ item }} is the current hosts value
{% endfor %}

# will display the name and path of the current file
{% for file in vars.variable_prefix.dict_name.files %}
{{ file.name }}: {{ file.path }}
{% endfor %}

# will display the users list of each file
{% for file in vars.variable_prefix.dict_name.files %}
{{ file.name }}: {{ file.users|join(' ') }}
{% endfor %}


# will display the current properties key/value pair
{% for key, value in vars.variable_prefix.dict_name.properties.items() %}
{{ key }} -> {{ value }}
{% endfor %}

Which will be expanded as:

host1 is the current hosts value
host2 is the current hosts value

# will display the name and path of the current file
file1: /path1
file2: /path2

# will display the users list of each file
file1: user1 user11
file2: user2

# will display the current properties key/value pair
prop1 -> value1
prop2 -> value2

System variables

Some sys dict variables (like sys.ipv4) are also accessible as string, for example:

  • ${sys.ipv4} gives 54.32.12.4

  • ${sys.ipv4[eth0]} gives 54.32.12.4

  • ${sys.ipv4[eth1]} gives 10.45.3.2

These variables are not accessible as dict in the templating data, but are represented as string:

  • ipv4 is a string variable in the sys dict with value 54.32.12.4

  • ipv4[eth0] is a string variable in the sys dict with value 54.32.12.4

  • ipv4 is not accessible as a dict in the template

To access these values, use the following syntax in your jinja2 templates:

vars.sys['ipv4[eth0]']

from_string_mustache [unix]

Build a file from a mustache string

file(path).from_string_mustache(template)
  • template: String containing a template to be expanded

Build a file from a mustache string. Complete mustache documentation is available in the file_from_template_mustache method documentation.
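
Example

For instance, to expand an inline template using a variable defined with variable_string("configuration", "hostname", "my.hostname.example") (hypothetical destination and variable; as with the other from_* methods, the template comes before the destination path in the bundle call):

file_from_string_mustache("server {{{vars.configuration.hostname}}}", "/etc/ntp.conf")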


from_shared_folder [windows, unix]

Ensure that a file or directory is copied from the Rudder shared folder.

file(path).from_shared_folder(source, hash_type)
  • hash_type: Hash algorithm used to check if the file is updated (sha256, sha512). Only used on Windows, ignored on Unix. Default is sha256.

    • empty, sha256, sha512, md5, sha1

  • source: Source file (path, relative to /var/rudder/configuration-repository/shared-files)

Ensure that a file or directory is copied from the Rudder shared folder. The Rudder shared folder is located on the Rudder server under /var/rudder/configuration-repository/shared-files. Every file/folder in the shared folder will be available for every managed node. This method will download and update the destination file from a source taken from this shared folder. A file in the shared folder will be updated on the node side at agent run.
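
Example

For instance, to copy the file ntp.conf from the shared folder (i.e. /var/rudder/configuration-repository/shared-files/ntp.conf on the server) to /etc/ntp.conf, with the default sha256 hash type (only used on Windows):

file_from_shared_folder("ntp.conf", "/etc/ntp.conf", "sha256")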


from_remote_template [unix]

Build a file from a template on the Rudder server

file(path).from_remote_template(source_template, template_type)
  • source_template: Source file containing a template to be expanded (absolute path on the server)

  • template_type: Template type (jinja2 or mustache)

    • jinja2, mustache

Write a file based on a template on the Rudder server and data available on the node

Usage

To use this method, you need to have:

  • a template on the Rudder server shared folder

  • data to fill this template

The template needs to be located in the shared-files folder and can be accessed with:

/var/rudder/configuration-repository/shared-files/PATH_TO_YOUR_FILE

The data that will be used while expanding the template is the data available in the agent at the time of expansion. That means:

  • Agent’s system variables (${sys.*}, …​) and conditions (linux, …​)

  • data defined during execution (result conditions of generic methods, …​)

  • conditions based on condition_ generic methods

  • data defined using variable_* generic methods, which allow for example to load data from local json or yaml files.

Template types

Supported templating languages:

  • mustache templates, which are documented in file_from_template_mustache

  • jinja2 templates, which are documented in file_from_template_jinja2

Reporting

This method will report an extra log_warning message if the template was not updated but the destination file was modified.
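
Example

A minimal example, assuming a hypothetical template ntp.conf.mustache stored in the shared-files folder:

file_from_remote_template("/var/rudder/configuration-repository/shared-files/ntp.conf.mustache", "/etc/ntp.conf", "mustache")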


from_remote_source_recursion [unix]

Ensure that a file or directory is copied from a policy server

file(path).from_remote_source_recursion(source, recursion)
  • recursion: Recursion depth to enforce for this path (0, 1, 2, …​, inf)

  • source: Source file (absolute path on the policy server)

This method requires that the policy server is configured to accept copy of the source file or directory from the agents it will be applied to.

You can download a file from the shared files with:

/var/rudder/configuration-repository/shared-files/PATH_TO_YOUR_DIRECTORY_OR_FILE

from_remote_source [unix]

Ensure that a file or directory is copied from a policy server

file(path).from_remote_source(source)
  • source: Source file (absolute path on the policy server)

Note: This method uses the agent native file copy protocol, and can only download files from the policy server. To download a file from an external source, you can use HTTP with the file_download method.

This method requires that the policy server is configured to accept copy of the source file from the agents it will be applied to.

You can download a file from the shared files with:

/var/rudder/configuration-repository/shared-files/PATH_TO_YOUR_FILE
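
Example

For instance, to copy a hypothetical ntp.conf from the shared files to /etc/ntp.conf:

file_from_remote_source("/var/rudder/configuration-repository/shared-files/ntp.conf", "/etc/ntp.conf")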

from_local_source_with_check [unix]

Ensure that a file or directory is copied from a local source if a check command succeeds

file(path).from_local_source_with_check(source, check_command, rc_ok)
  • check_command: Command to run, it will get the source path as argument

  • rc_ok: Return codes to be considered as valid, separated by a comma (default is 0)

    • can be empty

  • source: Source file (absolute path on the target node)

This method is a conditional file copy.

It allows comparing the source and destination, and if they are different, calling a command with the source file path as argument, and only updating the destination if the command succeeds (i.e. returns a code included in rc_ok).

Examples
# To copy a configuration file only if it passes a config test:
file_from_local_source_with_check("/tmp/program.conf", "/etc/program.conf", "program --config-test", "0");

This will:

  • Compare /tmp/program.conf and /etc/program.conf, and return kept if the files are the same

  • If not, it will execute program --config-test "/tmp/program.conf" and check the return code

  • If it is one of the rc_ok codes, it will copy /tmp/program.conf into /etc/program.conf and return repaired

  • If not, it will return an error


from_local_source_recursion [unix]

Ensure that a file or directory is copied from a local source

file(path).from_local_source_recursion(source, recursion)
  • recursion: Recursion depth to enforce for this path (0, 1, 2, …​, inf)

  • source: Source file (absolute path on the target node)

Ensure that a file or directory is copied from a local source. If the source is a directory, you can force a maximum level of copy recursion.

  • 0 being no recursion, which will only create an empty folder

  • inf being a complete recursive copy of the folder

  • 1,2,3,…​ will force the maximal level of recursion to copy
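
Example

For instance, to recursively copy a hypothetical local directory /var/backups/app to /opt/app with unlimited depth:

file_from_local_source_recursion("/var/backups/app", "/opt/app", "inf")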


from_local_source [windows, unix]

Ensure that a file or directory is copied from a local source

file(path).from_local_source(source)
  • source: Source file (absolute path on the target node)

Ensure that a file or directory is copied from a local source on the target node. The copy is not recursive if the source is a directory. To recursively copy a folder from a local source, use the File from local source recursion method.


from_http_server [windows, unix]

Download a file if it does not exist, using curl with a fallback on wget

file(path).from_http_server(source)
  • source: URL to download from

This method finds an HTTP command-line tool and downloads the given source into the destination if it does not exist yet.

This method will NOT update the file after the first download until its removal.

On Linux-based nodes it will try curl first, with a fallback to wget if needed. On Windows-based nodes, only curl will be used.
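
Example

For instance, to download a file from a hypothetical URL if /tmp/app.conf does not exist yet:

file_from_http_server("https://repo.example.com/app.conf", "/tmp/app.conf")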


content [windows, unix]

Enforce the content of a file

file(path).content(lines, enforce)
  • enforce: Enforce the file to contain only line(s) defined (true or false)

  • lines: Line(s) to add in the file - if lines is a list, please use @{lines} to pass the iterator rather than iterating over each values

Enforce the content of a file. The enforce parameter changes the editing method:

  • If enforce is set to true, the file content will be forced exactly

  • If enforce is set to false, the file content will be forced line by line. This means that each managed line cannot be duplicated, and the line order will not be guaranteed.

In most cases, the enforce parameter should be set to true. When enforce is set to false, and the managed lines are:

Bob
Alice
Charly

the method will report the following file content as compliant:

Bob
Alice
Charly
Charly
Bob
Alice
Charly
Bob
Charly
Alice
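
Example

For instance, to enforce that a hypothetical /etc/motd contains exactly one line:

file_content("/etc/motd", "Managed by Rudder - do not edit", "true")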

check_symlinkto [unix]

Checks if first file is symlink to second file

file(path).check_symlinkto(target)
  • target: Target file (absolute path on the target node)

This bundle will define a condition file_check_symlinkto_${target}_{ok, reached, kept} if the file ${path} is a symbolic link to ${target}, or file_check_symlinkto_${target}_{not_ok, reached, not_kept, failed} if it is not a symbolic link or if any of the files does not exist. The symlink’s path is resolved to the absolute path and checked against the target file’s path, which must also be an absolute path.


check_symlink [unix]

Checks if a file exists and is a symlink

file(path).check_symlink()

This bundle will define a condition file_check_symlink_${path}_{ok, reached, kept} if the file is a symlink, or file_check_symlink_${path}_{not_ok, reached, not_kept, failed} if the file is not a symlink or does not exist


check_socket [unix]

Checks if a file exists and is a socket

file(path).check_socket()

This bundle will define a condition file_check_socket_${path}_{ok, reached, kept} if the file is a socket, or file_check_socket_${path}_{not_ok, reached, not_kept, failed} if the file is not a socket or does not exist


check_regular [unix]

Checks if a file exists and is a regular file

file(path).check_regular()

This bundle will define a condition file_check_regular_${path}_{ok, reached, kept} if the file is a regular file, or file_check_regular_${path}_{not_ok, reached, not_kept, failed} if the file is not a regular file or does not exist


check_hardlink [unix]

Checks if two files are the same (hard links)

file(path).check_hardlink(path_2)
  • path_2: File name #2 (absolute path on the target node)

This bundle will define a condition file_check_hardlink_${path}_{ok, reached, kept} if the two files ${path} and ${path_2} are hard links of each other, or file_check_hardlink_${path}_{not_ok, reached, not_kept, failed} if the files are not hard links.


check_exists [unix]

Checks if a file exists

file(path).check_exists()

This bundle will define a condition file_check_exists_${path}_{ok, reached, kept} if the file exists, or file_check_exists_${path}_{not_ok, reached, not_kept, failed} if the file doesn’t exist


check_character_device [unix]

Checks if a file exists and is a character device

file(path).check_character_device()

This bundle will define a condition file_check_character_device_${path}_{ok, reached, kept} if the file is a character device, or file_check_character_device_${path}_{not_ok, reached, not_kept, failed} if the file is not a character device or does not exist


check_block_device [unix]

Checks if a file exists and is a block device

file(path).check_block_device()

This bundle will define a condition file_check_block_device_${path}_{ok, reached, kept} if the file is a block device, or file_check_block_device_${path}_{not_ok, reached, not_kept, failed} if the file is not a block device or does not exist


check_FIFO_pipe [unix]

Checks if a file exists and is a FIFO/Pipe

file(path).check_FIFO_pipe()

This bundle will define a condition file_check_FIFO_pipe_${path}_{ok, reached, kept} if the file is a FIFO, or file_check_FIFO_pipe_${path}_{not_ok, reached, not_kept, failed} if the file is not a fifo or does not exist


block_present_in_section [unix]

Ensure that a section contains exactly a text block

file(path).block_present_in_section(section_start, section_end, block)
  • block: Block representing the content of the section

  • section_end: End of the section

  • section_start: Start of the section

Ensure that a section contains exactly a text block. A section is delimited by a header and a footer.

  • If the section exists, its content will be replaced if needed

  • Otherwise it will be created at the end of the file
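
Example

For instance, to maintain a managed block between hypothetical marker lines in /etc/hosts.allow:

file_block_present_in_section("/etc/hosts.allow", "# BEGIN RUDDER", "# END RUDDER", "sshd: 192.168.1.0/24")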


block_present [unix]

Ensure that a text block is present in a specific location

file(path).block_present(block)
  • block: Block(s) to add in the file

Ensure that a text block is present in the target file. If the block is not found, it will be added at the end of the file.

Examples:

Given a file with the following content:

apple
pear
banana

Applying the method with the block:

pear
orange

Will result in the following content:

apple
pear
banana
pear
orange

augeas_set [unix]

Use augeas commands and options to set a node label’s value.

file(path).augeas_set(value, lens, file)
  • file: Load a specific file (optional)

    • can be empty

  • lens: Load a specific lens (optional)

    • can be empty

  • value: The value to set

Augeas is a tool that provides an abstraction layer over the complexities of editing configuration files with regular expressions. It represents system configuration files as a tree hierarchy, in which files can be safely modified by providing the path to a node label’s value.

Augeas uses lenses, which are modules in charge of identifying files and converting them into a tree representation and back.

This method uses augtool to force the value of an augeas node’s label.

There are two ways to use this method:

  • By providing the augeas path to the node’s label and leaving lens and file empty: augeas will then load the common files and lenses automatically.

  • By using a given file path and a specific lens: this gives better performance, since only one lens is loaded, and supports custom lenses and custom paths (for instance to apply the Hosts lens to another file than /etc/hosts).

Warning: When you don’t specify the file and lens to use, no backup of the file will be made before editing it.

Two example use cases:

In the first case, let’s suppose that you want to set the value of the IP address on the first line of the /etc/hosts file to 192.168.1.5. To do so, provide the augeas path and the value parameters, leaving lens and file empty.

file_augeas_set("/etc/hosts/1/ipaddr", "192.168.1.5", "", "");

The second case is more efficient, and forces the Hosts lens to parse the /etc/hosts file and set the value for the given path node:

file_augeas_set("/etc/hosts/1/ipaddr", "192.168.1.5", "Hosts", "/etc/hosts");

augeas_commands [unix]

Use Augeas binaries to execute augtool commands and options directly on the agent.

file(path).augeas_commands(variable_prefix, commands, autoload)
  • autoload: Deactivate the autoload option if you don’t want augeas to load all the files/lens, it’s true by default.

    • empty, true, false

  • commands: The augeas command(s)

  • variable_prefix: The prefix of the variable name

Augeas is a tool that provides an abstraction layer for all the complexities that turn around editing files with regular expressions.

This method defines a Rudder variable from the output of an augtool command. The method has 4 parameters in total:

  • variable_prefix: target variable prefix

  • variable_name: target variable name

  • commands: augtool script to run

  • autoload: boolean to load or not the common augeas lens, default to true

Augtool provides a bunch of other commands and options that you can use in this generic method, such as match to print the matches for a specific path expression, span to print the position in the input file corresponding to a tree node, retrieve to transform a tree into text, and save to save all pending changes. If Augeas is not installed on the agent, this method will produce an error.

This method executes the given commands via augtool. There are two main ways to use it, depending on your needs:

With autoload

Since autoload is active, Augeas will load all known files and lenses before executing the commands you have specified.

file_augeas_commands("label","value","print /files/etc/hosts/*/ipaddr[../canonical="server.rudder.local"]","")
# The variable label.value will be defined as such:
${label.value} -> /files/etc/hosts/2/ipaddr = "192.168.2.2"
file_augeas_commands("label","value","ls files/etc/ \n print /files/etc/ssh/sshd_config","true")
# Will define the variable label.value with the list of files available in /etc and already parsable with augeas,
# followed by the dump of the sshd_config file, parsed by augeas.
Without autoload

The second case is when you deactivate this option by setting autoload to false. You then have to load your files and lenses manually in the commands parameter, using the set augeas command. Below is a second example where the lens and file are explicitly set:

file_augeas_commands("label","value","set /augeas/load/Sshd/lens \"Sshd.lns\" \n set /augeas/load/Sshd/incl \"/etc/ssh/sshd_config\" \n load \n print /augeas/load/Sshd \n print /files/etc/ssh/sshd_config","false")

absent [windows, unix]

Remove a file if it exists

file(path).absent()
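
For example, to make sure a hypothetical temporary file is removed (the path is purely illustrative):

file("/tmp/old_install.log").absent()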

template_expand [unix] - DEPRECATED

This is a bundle to expand a template in a specific location

file(path).template_expand(tml_file, mode, owner, group)
  • group: Group of destination file

  • mode: Mode of destination file

  • owner: Owner of destination file

  • tml_file: File name (with full path within the framework) of the template file


from_template [windows, unix] - DEPRECATED

Build a file from a legacy CFEngine template

file(path).from_template(source_template)
  • source_template: Source file containing a template to be expanded (absolute path on the target node)

See file_from_template_type for general documentation about templates usage.


group

resource group(name)
  • name: Group name

States

present [unix]

Create a group

group(name).present()

absent [unix]

Make sure a group is absent

group(name).absent()
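
As an illustration, creating one group and removing another could look like this (the group names are hypothetical):

group("sysadmin").present()
group("legacy_ops").absent()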

http_request

resource http_request(method, url)
  • method: Method to call the URL (POST, PUT)

  • url: URL to send content to

States

content_headers [unix]

Make an HTTP request with a specific header

http_request(method, url).content_headers(content, headers)
  • content: Content to send

  • headers: Headers to include in the HTTP request

    • can be empty

Perform a HTTP request on the URL, method and headers provided and send the content provided. Will return an error if the request failed.
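
As a sketch, a POST request sending a form-encoded payload with a Content-Type header (the URL and payload are illustrative):

http_request("POST", "https://api.example.com/status").content_headers("state=ok", "Content-Type: application/x-www-form-urlencoded")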


check_status_headers [unix]

Checks status of an HTTP URL

http_request(method, url).check_status_headers(expected_status, headers)
  • expected_status: Expected status code of the HTTP response

  • headers: Headers to include in the HTTP request (as a string, without ')

    • can be empty

Perform a HTTP request on the URL, method and headers provided and check that the response has the expected status code (ie 200, 404, 503, etc)
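
For example, checking that a hypothetical URL answers with a 200 status code (the GET method is assumed to be accepted here):

http_request("GET", "https://intranet.example.com/health").check_status_headers("200", "")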


kernel_module

resource kernel_module(name)
  • name: Complete name of the kernel module, as seen by lsmod or listed in /proc/modules

States

not_loaded [unix]

Ensure that a given kernel module is not loaded on the system

kernel_module(name).not_loaded()

Ensure that a given kernel module is not loaded on the system. If the module is loaded, it will try to unload it using modprobe.


loaded [unix]

Ensure that a given kernel module is loaded on the system

kernel_module(name).loaded()

Ensure that a given kernel module is loaded on the system. If the module is not loaded, it will try to load it via modprobe.
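
For instance, ensuring one module is loaded and another is not (the module names are just examples):

kernel_module("br_netfilter").loaded()
kernel_module("floppy").not_loaded()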


enabled_at_boot [unix]

Ensure that a given kernel module will be loaded at system boot

kernel_module(name).enabled_at_boot()

Ensure that a given kernel module is enabled at boot on the system. This method only works on systemd systems. Rudder will look for a line matching the module name in a given section in the file:

  • /etc/modules-load.d/enabled_by_rudder.conf on systemd systems

If the module is already enabled by a different option file than the one used by Rudder, it will add an entry in the Rudder-managed file listed above and leave the already present one intact. The modifications are persistent and made line by line, meaning that this generic method will never remove lines in the configuration file, only add them when needed.

Please note that this method will not load the module nor configure it, it will only enable its loading at system boot. If you want to force the module to be loaded, use instead the method kernel_module_loaded. If you want to configure the module, use instead the method kernel_module_configuration.
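
For example, to ensure a hypothetical module is loaded at every boot:

kernel_module("br_netfilter").enabled_at_boot()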


configuration [unix]

Ensure that the modprobe configuration of a given kernel module is correct

kernel_module(name).configuration(configuration)
  • configuration: Complete configuration block to put in /etc/modprobe.d/

    • must match: ^(alias|blacklist|install|options|remove|softdeps) +.*$

Ensure that the modprobe configuration of a given kernel module is correct. Rudder will search for the module configuration in a per-module dedicated section in /etc/modprobe.d/managed_by_rudder.conf.

  • If the module configuration is not found or incorrect, Rudder will (re-)create its configuration.

  • If the module is configured but with a different option file than used by Rudder, it will add the expected one in /etc/modprobe.d/managed_by_rudder.conf but will leave intact the already present one.

The configuration syntax must respect the one used by /etc/modprobe.d defined in the modprobe.d manual page.

  # To pass a parameter to a module:
  options module_name parameter_name=parameter_value
  # To blacklist a module
  blacklist modulename
  # etc...
Notes:

If you want to force the module to be loaded at boot, use instead the method kernel_module_enabled_at_boot which uses other Rudder dedicated files.

Example:

To pass options to a broadcom module

  • name = b43

  • configuration = options b43 nohwcrypt=1 qos=0

Will produce the resulting block in /etc/modprobe.d/managed_by_rudder.conf:

### b43 start section
options b43 nohwcrypt=1 qos=0
### b43 end section

monitoring_parameter

resource monitoring_parameter(key)
  • key: Name of the parameter

States

present [unix]

Add a monitoring parameter to a node (requires a monitoring plugin)

monitoring_parameter(key).present(value)
  • value: Value of the parameter

This method adds monitoring parameters to Rudder nodes. The monitoring parameters are used to pass configuration to the monitoring plugins running with Rudder. Expected keys and parameters are specific to each plugin and can be found in their respective documentation.
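
For example, assigning an illustrative key/value pair (actual keys depend on the monitoring plugin in use):

monitoring_parameter("snmp_community").present("public")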


monitoring_template

resource monitoring_template(template)
  • template: Name of the monitoring template

States

present [unix]

Add a monitoring template to a node (requires a monitoring plugin)

monitoring_template(template).present()

This method assigns monitoring templates to a Rudder node. The Rudder plugin respective to each monitoring platform will apply those templates to the node.
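
For example, with a hypothetical template name defined on the monitoring platform:

monitoring_template("linux-base-checks").present()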


package

resource package(name)
  • name: Name of the package to verify

States

state_windows [windows]

This method manages packages on the system using Chocolatey.

package(name).state_windows(Status, Provider, Params, Version, Source, ProviderParams, AutoUpgrade)
  • AutoUpgrade: autoUpgrade, default to false

    • empty, true, false

  • Params: params to pass to the package installation

    • can be empty

  • Provider: default to choco

    • empty, choco

  • ProviderParams: provider parameters, default to choco

    • can be empty

  • Source: source

    • can be empty

  • Status: 'present' or 'absent', defaults to 'present'

    • empty, present, absent

  • Version: version, default to latest

    • can be empty

Install a windows package using a given provider

Parameters

Required args:

  • PackageName Name of target package

  • Status can be "present" or "absent"

Optional args:

  • Provider Provider used to install the package

  • Params Package parameters, passed to the installer

  • Version can be "any", "latest" or any exact specific version number

  • Source "any" or any specific arch

  • ProviderParams provider specific options

  • AutoUpgrade default set to false

Providers

choco

The method is a simple transcription of the cchoco cChocoPackageInstaller DSC resource, adapted to Rudder. The cchoco DSC module must be installed on your node before using this method.

You can check the cchoco/chocolatey documentation for more detailed information on the parameters. WARNING: If exceptions about an undefined env PATH variable are thrown after a fresh cchoco installation in Rudder, you may need to reboot the machine or notify your system that the environment variables have changed.
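
Following the signature above, a sketch of installing a package through the choco provider (the package name and values are illustrative):

package("7zip").state_windows("present", "choco", "", "latest", "", "", "false")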


state_options [unix]

Enforce the state of a package with options

package(name).state_options(version, architecture, provider, state, options)
  • architecture: Architecture of the package, can be an architecture name or "default" (defaults to "default")

    • can be empty

  • options: Options to pass to the package manager (defaults to empty)

    • can be empty

  • provider: Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for system default package manager (defaults to "default")

    • empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient

  • state: State of the package, can be "present" or "absent" (defaults to "present")

    • empty, present, absent

  • version: Version of the package, can be "latest" for latest version or "any" for any version (defaults to "any")

    • can be empty

See package_state for documentation.
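
As a sketch, installing a package while passing an extra option to apt (the option shown is illustrative):

package("nginx").state_options("latest", "", "apt", "present", "--no-install-recommends")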


state [unix]

Enforce the state of a package

package(name).state(version, architecture, provider, state)
  • architecture: Architecture of the package, can be an architecture name or "default" (defaults to "default")

    • can be empty

  • provider: Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for system default package manager (defaults to "default")

    • empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient

  • state: State of the package, can be "present" or "absent" (defaults to "present")

    • empty, present, absent

  • version: Version of the package, can be "latest" for latest version or "any" for any version (defaults to "any")

    • can be empty

These methods manage packages using a package manager on the system.

package_present and package_absent use a new package implementation, different from package_install_*, package_remove_* and package_verify_*. It should be more reliable and handle upgrades better. It is compatible though, and you can call generic methods from both implementations on the same host. The only drawback is that the agent will have to maintain two caches of package lists, which may cause a little unneeded overhead. These methods will update the corresponding package if updates are available. New updates may not be detected even when some are available; this is due to the update cache, which is refreshed every 4 hours by default. You can modify this behaviour with the updates_cache_expire global parameter.

Package parameters

There is only one mandatory parameter, which is the package name to install. When it should be installed from a local package, you need to specify the full path to the package as name.

The version parameter allows specifying the version you want installed. It should be the complete version string as used by the package manager. This parameter allows two special values:

  • any which is the default value, and is satisfied by any version of the given package

  • latest which will ensure, at each run, that the package is at the latest available version.

The last parameter is the provider, which is documented in the next section.

You can use package_state_options to pass options to the underlying package manager (currently only supported with the apt package manager).

Package providers

This method supports several package managers. You can specify the package manager you want to use or let the method choose the default for the local system.

The package providers include a caching system for package information. The package lists (installed, available and available updates) are only updated when the cache expires, or when an operation is made by the agent on packages.

Note: The implementation of package operations is done in scripts called modules, which you can find in ${sys.workdir}/modules/packages/.

apt

This package provider uses apt/dpkg to manage packages on the system. dpkg will be used for all local actions, and apt is only needed to manage update and installation from a repository.

rpm

This package provider uses yum/rpm to manage packages on the system. rpm will be used for all local actions, and yum is only needed to manage update and installation from a repository.

It is able to downgrade packages when specifying an older version.

zypper

This package provider uses zypper/rpm to manage packages on the system. rpm will be used for all local actions, and zypper is only needed to manage update and installation from a repository.

Note: If the package version you want to install contains an epoch, you have to specify it in the version in the epoch:version form, like reported by zypper info.

zypper_pattern

This package provider uses zypper with the -t pattern option to manage zypper patterns or meta-packages on the system.

Since a zypper pattern can be named differently than the rpm package name providing it, please always use the exact pattern name (as listed in the output of zypper patterns) when using this provider.

Note: When installing a pattern from a local rpm file, Rudder assumes that the pattern is built following the official zypper documentation.

Older implementations of zypper patterns may not be supported by this module.

This provider doesn’t support installation from a file.

slackpkg

This package provider uses Slackware’s installpkg and upgradepkg tools to manage packages on the system.

pkg

This package provider uses FreeBSD’s pkg to manage packages on the system. This provider doesn’t support installation from a file.

ips

This package provider uses Solaris’s pkg command to manage packages from IPS repositories on the system. This provider doesn’t support installation from a file.

nimclient

This package provider uses AIX’s nim client to manage packages from a nim server. This provider doesn’t support installation from a file.

Examples
# To install postgresql in version 9.1 for x86_64 architecture
package_present("postgresql", "9.1", "x86_64", "");
# To ensure postgresql is always in the latest available version
package_present("postgresql", "latest", "", "");
# To ensure installing postgresql in any version
package_present("postgresql", "", "", "");
# To ensure installing postgresql in any version, forcing the yum provider
package_present("postgresql", "", "", "yum");
# To ensure installing postgresql from a local package
package_present("/tmp/postgresql-9.1-1.x86_64.rpm", "", "", "");
# To remove postgresql
package_absent("postgresql", "", "", "");

present [unix]

Enforce the presence of a package

package(name).present(version, architecture, provider)
  • architecture: Architecture of the package, can be an architecture name or "default" (defaults to "default")

    • can be empty

  • provider: Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for system default package manager (defaults to "default")

    • empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient

  • version: Version of the package, can be "latest" for latest version or "any" for any version (defaults to "any")

    • can be empty

See package_state for documentation.


check_installed [unix]

Verify if a package is installed in any version

package(name).check_installed()

This bundle will define a condition package_check_installed_${file_name}_{ok, reached, kept} if the package is installed, or package_check_installed_${file_name}_{not_ok, reached, not_kept, failed} if it is not.
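
For example, checking whether a package is installed without changing anything on the system:

package("postgresql").check_installed()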


absent [unix]

Enforce the absence of a package

package(name).absent(version, architecture, provider)
  • architecture: Architecture of the package, can be an architecture name or "default" (defaults to "default")

    • can be empty

  • provider: Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for system default package manager (defaults to "default")

    • empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient

  • version: Version of the package or "any" for any version (defaults to "any")

    • can be empty

See package_state for documentation.


verify_version [unix] - DEPRECATED

Verify if a package is installed in a specific version

package(name).verify_version(version)
  • version: Version of the package to verify (can be "latest" for latest version)


verify [unix] - DEPRECATED

Verify if a package is installed in its latest version available

package(name).verify()

remove [unix] - DEPRECATED

Remove a package

package(name).remove()

Example:

methods:
    "any" usebundle => package_remove("htop");

install_version_cmp_update [unix] - DEPRECATED

Install a package or verify if it is installed in a specific version, or higher or lower version than a version specified, optionally test update or not (Debian-, Red Hat- or SUSE-like systems only)

package(name).install_version_cmp_update(version_comparator, package_version, action, update_policy)
  • action: Action to perform, can be add, verify (defaults to verify)

  • package_version: The version of the package to verify (can be "latest" for latest version)

  • update_policy: While verifying packages, check against latest version ("true") or just installed ("false")

  • version_comparator: Comparator between installed version and defined version, can be ==, <=, >=, <, >, !=

    • ==, <=, >=, <, >, !=

Example:

methods:
    "any" usebundle => package_install_version_cmp_update("postgresql", ">=", "9.1", "verify", "false");

install_version_cmp [unix] - DEPRECATED

Install a package or verify if it is installed in a specific version, or higher or lower version than a version specified

package(name).install_version_cmp(version_comparator, package_version, action)
  • action: Action to perform, can be add, verify (defaults to verify)

  • package_version: The version of the package to verify (can be "latest" for latest version)

  • version_comparator: Comparator between installed version and defined version, can be ==, <=, >=, <, >, !=

    • ==, <=, >=, <, >, !=

Example:

methods:
    "any" usebundle => package_install_version_cmp("postgresql", ">=", "9.1", "verify");

install_version [unix] - DEPRECATED

Install or update a package in a specific version

package(name).install_version(package_version)
  • package_version: Version of the package to install (can be "latest" to install it in its latest version)


install [unix] - DEPRECATED

Install or update a package in its latest version available

package(name).install()

permissions

resource permissions(path)
  • path: Path of the file or directory

States

value [unix]

Set permissions on a file or directory (non recursively)

permissions(path).value(mode, owner, group)
  • group: Group to enforce (like "wheel")

    • can be empty

  • mode: Mode to enforce (like "640")

    • can be empty

  • owner: Owner to enforce (like "root")

    • can be empty
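
For example, to enforce root ownership and mode 600 on a given file (the path is illustrative):

permissions("/etc/ssh/sshd_config").value("600", "root", "root")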


user_acl_present [unix]

Verify that an ace is present on a file or directory for a given user. This method will make sure the given ace is present in the POSIX ACL of the target.

permissions(path).user_acl_present(recursive, user, ace)
  • ace: ACE to enforce for the given user.

    • must match: ^[+-=]?(?=.*[rwx])r?w?x?$

  • recursive: Should the ACL modification be recursive, "true" or "false" (defaults to "false").

    • empty, true, false

  • user: Username of the Linux account.

The permissions_*acl_* manage the POSIX ACL on files and directories.

Please note that the mask will be automatically recalculated when editing ACLs.

Parameters

Path

Path supports globbing with the following patterns:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.

Recursive

Can be:

  • true to apply the given aces to folder and sub-folders and files.

  • or false to apply to the strict match of Path

If left blank, recursion is automatically set to false

User

Username of the Linux account whose ACE is enforced. This method can only handle one username.

ACE

The operator can be:

  • + to add the given ACE to the current ones.

  • - to remove the given ACE from the current ones.

  • = to replace the current ACE with the given one.

  • empty: if no operator is specified, it will be interpreted as =.

The ACE must match the following regular expression:

  • ^[+-=]?(?=.*[rwx])r?w?x?$

Example

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::---

Applying this method with the following parameters:

  • path: /tmp/myTestFile

  • recursive: false

  • user: bob

  • ace: -rw

Will transform the previous ACLs in:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:--x
group::r--
mask::r-x
other::---

user_acl_absent [unix]

Verify that an ace is absent on a file or directory for a given user. This method will make sure that no ace is present for the given user in the POSIX ACL of the target.

permissions(path).user_acl_absent(recursive, user)
  • recursive: Should the ACL modification be recursive, "true" or "false" (defaults to "false")

    • empty, true, false

  • user: Username of the Linux account.

The permissions_*acl_* manage the POSIX ACL on files and directories.

Please note that the mask will be automatically recalculated when editing ACLs.

Parameters

Path

Path supports globbing with the following patterns:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.

Recursive

Can be:

  • true to apply the given aces to folder and sub-folders and files.

  • or false to apply to the strict match of Path

If left blank, recursion is automatically set to false

User

Username of the Linux account whose ACE must be absent. This method can only handle one username.

Example

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::---

Applying this method with the following parameters:

  • path: /tmp/myTestFile

  • recursive: false

  • user: bob

Will transform the previous ACLs in:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
mask::r--
other::---

type_recursion [unix]

Ensure that a file or directory is present and has the right mode/owner/group

permissions(path).type_recursion(mode, owner, group, type, recursion)
  • group: Group of the path to edit

    • can be empty

  • mode: Mode of the path to edit

    • can be empty

  • owner: Owner of the path to edit

    • can be empty

  • recursion: Recursion depth to enforce for this path (0, 1, 2, …​, inf)

  • type: Type of the path to edit (all/files/directories)

This method ensures that all files, directories, or both (depending on the type parameter) under path have the correct owner, group owner and permissions.

The type parameter can be either "all", "files" or "directories". The recursion parameter can be "0", "1", "2", …, "inf". The recursion level is the maximum depth of subfolders that will be managed by the method:

  • 0 being the current folder/file

  • 1 being the current folder/file and its subfolders

  • ..

  • inf being the file or the whole folder tree
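
As a sketch, enforcing permissions on the files of a directory and its direct subfolders only (the path and values are illustrative):

permissions("/var/www").type_recursion("640", "www-data", "www-data", "files", "1")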


recursive [unix]

Verify if a file or directory has the right permissions recursively

permissions(path).recursive(mode, owner, group)
  • group: Group to enforce

    • can be empty

  • mode: Mode to enforce

    • can be empty

  • owner: Owner to enforce

    • can be empty

The method ensures that all files and directories under path have the correct owner, group owner and permissions.

This method is in fact a call to the permissions_type_recursion method with "all" type and "inf" recursion.
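
For example, to enforce owner, group and mode on a whole directory tree (the path is illustrative):

permissions("/opt/myapp").recursive("750", "appuser", "appuser")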


posix_acls_absent [unix]

Ensure that a file or directory has no ACL set

permissions(path).posix_acls_absent(recursive)
  • recursive: Should ACLs cleanup be recursive, "true" or "false" (defaults to "false")

    • empty, true, false

The permissions_*acl_* manage the POSIX ACL on files and directories.

Parameters

Path

Path supports globbing with the following patterns:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.

Recursive

Can be:

  • true to apply the given aces to folder and sub-folders and files.

  • or false to apply to the strict match of Path

If left blank, recursion is automatically set to false

Example

The method has basically the same effect as setfacl -b <path>.

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:vagrant:rwx
group::r--
mask::rwx
other::---

It will remove all ACL entries and leave only the classic permission bits, here:

root@server# getfacl myTestFile
# file: myTestFile
# owner: root
# group: root
user::rwx
group::r--
other::---

root@server# ls -l myTestFile
-rwxr----- 1 root root 0 Mar 22 11:24 myTestFile
root@server#

other_acl_present [unix]

Verify that the given other ace is present on a file or directory. This method will make sure the given other ace is present in the POSIX ACL of the target.

permissions(path).other_acl_present(recursive, other)
  • other: ACE to enforce for the given other.

    • must match: ^[+-=]?(?=.*[rwx])r?w?x?$

  • recursive: Should the ACL modification be recursive, "true" or "false" (defaults to "false")

    • empty, true, false

The permissions_*acl_* manage the POSIX ACL on files and directories.

Please note that the mask will be automatically recalculated when editing ACLs.

Parameters

Path

Path supports globbing with the following patterns:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.

Recursive

Can be:

  • true to apply the given aces to folder and sub-folders and files.

  • or false to apply to the strict match of Path

If left blank, recursion is automatically set to false

Other_ACE

The operator can be:

  • + to add the given ACE to the current ones.

  • - to remove the given ACE from the current ones.

  • = to replace the current ACE with the given one.

  • empty: if no operator is specified, it will be interpreted as =.

The ACE must match the following regular expression:

  • ^[+-=]?(?=.*[rwx])r?w?x?$

Example

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::r-x

Applying this method with the following parameters:

  • path: /tmp/myTestFile

  • recursive: false

  • other ace: -rw

Will transform the previous ACLs in:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::--x

ntfs [windows]

Ensure NTFS permissions on a file for a given user.

permissions(path).ntfs(user, rights, accesstype, propagationpolicy)
  • accesstype: "Allow" or "Deny"

    • Allow, Deny, empty

  • propagationpolicy: Define the propagation policy of the access rule that Rudder is applying

    • ThisFolderOnly, ThisFolderSubfoldersAndFiles, ThisFolderAndSubfolders, ThisFolderAndFiles, SubfoldersAndFilesOnly, SubfoldersOnly, FilesOnly, empty

  • rights: Comma separated right list

  • user: DOMAIN\Account

Ensure that the correct NTFS permissions are applied on a file for a given user.

Inheritance and propagation flags can also be managed. If left blank, no propagation will be set.

To manage effective propagation or effective access, please disable the inheritance on the file before applying this generic method.

Note that the Synchronize permission may not work in some cases. This is a known bug.

Right validate set:

None, ReadData, ListDirectory, WriteData, CreateFiles, AppendData, CreateDirectories, ReadExtendedAttributes, WriteExtendedAttributes, ExecuteFile, Traverse, DeleteSubdirectoriesAndFiles, ReadAttributes, WriteAttributes, Write, Delete, ReadPermissions, Read, ReadAndExecute, Modify, ChangePermissions, TakeOwnership, Synchronize, FullControl

AccessType validate set:

Allow, Deny

PropagationPolicy validate set:

ThisFolderOnly, ThisFolderSubfoldersAndFiles, ThisFolderAndSubfolders, ThisFolderAndFiles, SubfoldersAndFilesOnly, SubfoldersOnly, FilesOnly
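
A sketch of granting recursive read and execute rights to a hypothetical domain account (the path and account are illustrative):

permissions("C:\Program Files\MyApp").ntfs("EXAMPLE\AppUser", "ReadAndExecute", "Allow", "ThisFolderSubfoldersAndFiles")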


group_acl_present [unix]

Verify that an ace is present on a file or directory for a given group. This method will make sure the given ace is present in the POSIX ACL of the target for the given group.

permissions(path).group_acl_present(recursive, group, ace)
  • ace: ACE to enforce for the given group.

    • must match: ^[+-=]?(?=.*[rwx])r?w?x?$

  • group: Group name

  • recursive: Should the ACL modification be recursive, "true" or "false" (defaults to "false")

    • empty, true, false

The permissions_*acl_* methods manage the POSIX ACL on files and directories.

Please note that the mask will be automatically recalculated when editing ACLs.

Parameters

Path

Path can be a regex with the following format:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.
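
These patterns can be approximated in Python with segment-wise fnmatch matching, so that * stays within one directory level as described above (a hedged sketch; the {x,y} alternation is not covered by fnmatch):

```python
from fnmatch import fnmatchcase

def path_match(path: str, pattern: str) -> bool:
    """Segment-wise glob match: '*' stays within one directory level,
    mimicking the scoping of the path patterns described above."""
    p_segs, pat_segs = path.split("/"), pattern.split("/")
    if len(p_segs) != len(pat_segs):
        return False
    return all(fnmatchcase(s, ps) for s, ps in zip(p_segs, pat_segs))

print(path_match("promises.cf", "*.cf"))        # True
print(path_match("dir/promises.cf", "*.cf"))    # False: one level only
print(path_match("dir/promises.cf", "*/*.cf"))  # True: two levels deep
print(path_match("file1.cf", "file?.cf"))       # True: ? is one letter
print(path_match("a.cf", "[a-z].cf"))           # True: character range
```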

Recursive

Can be:

  • true to apply the given ACEs to the folder and its sub-folders and files.

  • or false to apply only to the strict match of Path.

If left blank, recursion will automatically be set to false.

Group

Group to enforce the ACE for, being the Linux group name. This method can only handle one group name.

ACE

The operator can be:

  • + to add the given ACE to the current ones.

  • - to remove the given ACE from the current ones.

  • = to enforce exactly the given ACE, replacing the current ones.

  • empty, if no operator is specified, it will be interpreted as =.

The ACE must match the regular expression:

  • ^[+-=]?(?=.*[rwx])r?w?x?$
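
The regular expression above can be exercised directly; this Python sketch is purely illustrative:

```python
import re

# The ACE regex given above: an optional operator, then a non-empty
# subset of r, w, x in that order.
ACE_RE = re.compile(r"^[+-=]?(?=.*[rwx])r?w?x?$")

for ace in ("+rw", "-x", "=rwx", "rw", "", "+", "xw"):
    print(repr(ace), bool(ACE_RE.match(ace)))
# "+rw", "-x", "=rwx" and "rw" match; "", "+" (no mode) and
# "xw" (letters out of order) do not.
```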

Example

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
group:bob:rwx
mask::rwx
other::---

Applying this method with the following parameters:

  • path: /tmp/myTestFile

  • recursive: false

  • group: bob

  • ace: -rw

Will transform the previous ACLs in:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
group:bob:--x
mask::r-x
other::---

group_acl_absent [unix]

Verify that an ACE is absent on a file or directory for a given group. This method will make sure that no ACE is present in the POSIX ACL of the target for the given group.

permissions(path).group_acl_absent(recursive, group)
  • group: Group name

  • recursive: Should the ACL cleanup be recursive, "true" or "false" (defaults to "false")

    • empty, true, false

The permissions_*acl_* methods manage the POSIX ACL on files and directories.

Please note that the mask will be automatically recalculated when editing ACLs.

Parameters

Path

Path can be a regex with the following format:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.

Recursive

Can be:

  • true to apply the given ACEs to the folder and its sub-folders and files.

  • or false to apply only to the strict match of Path.

If left blank, recursion will automatically be set to false.

Group

Group whose ACE absence must be enforced, being the Linux group name. This method can only handle one group name.

Example

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
group:bob:rwx
mask::rwx
other::---

Applying this method with the following parameters:

  • path: /tmp/myTestFile

  • recursive: false

  • group: bob

Will transform the previous ACLs in:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
mask::r--
other::---

dirs_recursive [unix]

Verify if a directory has the right permissions recursively

permissions(path).dirs_recursive(mode, owner, group)
  • group: Group to enforce

    • can be empty

  • mode: Mode to enforce

    • can be empty

  • owner: Owner to enforce

    • can be empty


dirs [unix]

Verify if a directory has the right permissions non recursively

permissions(path).dirs(mode, owner, group)
  • group: Group to enforce

    • can be empty

  • mode: Mode to enforce

    • can be empty

  • owner: Owner to enforce

    • can be empty


acl_entry [unix]

Verify that an ace is present on a file or directory. This method will append the given aces to the current POSIX ACLs of the target.

permissions(path).acl_entry(recursive, user, group, other)
  • group: Group acls, comma separated, like: wheel:+wx, anon:-rwx

    • can be empty, must match: $|(([A-z0-9._-]+|\*):([+-=]r?w?x?)?,? *)+$

  • other: Other acls, like -x

    • can be empty, must match: $|[+-=^]r?w?x?$

  • recursive: Should the ACL change be applied recursively, "true" or "false" (defaults to "false")

    • empty, true, false

  • user: User acls, comma separated, like: bob:+rwx, alice:-w

    • can be empty, must match: $|(([A-z0-9._-]+|\*):([+-=]r?w?x?)?,? *)+$

The permissions_*acl_* methods manage the POSIX ACL on files and directories.

Please note that the mask will be automatically recalculated when editing ACLs.

Parameters

Path

Path can be a regex with the following format:

  • * matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won’t search across directories. */*.cf on the other hand will look two levels deep.

  • ? matches a single letter

  • [a-z] matches any letter from a to z

  • {x,y,anything} will match x or y or anything.

Recursive

Can be:

  • true to apply the given ACEs to the folder and its sub-folders and files.

  • or false to apply only to the strict match of Path.

If left blank, recursion will automatically be set to false.

User and Group

ACE for user and group can be left blank if they do not need any specification. If filled, they must follow the format:

<username|groupname>:<operator><mode>

with:

  • username being the Linux account name

  • groupname the Linux group name

  • Current owner user and owner group can be designated by the character *

The operator can be:

  • + to add the given ACE to the current ones.

  • - to remove the given ACE from the current ones.

  • = to enforce exactly the given ACE, replacing the current ones.

You can define multiple ACEs by separating them with commas.

Other

The ACE for other must match:

  • [+-=]r?w?x?

It can also be left blank to leave the Other ACE unchanged.
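
As an illustration of the <username|groupname>:<operator><mode> list format, here is a hypothetical Python parser (not Rudder code; the character class is tightened to A-Za-z for clarity):

```python
import re

# Hypothetical parser for the comma-separated ACE list format described
# above: <name>:<operator><mode>, where name may also be '*'.
ENTRY_RE = re.compile(r"^([A-Za-z0-9._-]+|\*):([+-=]?)([rwx]*)$")

def parse_aces(spec: str):
    """Split a comma-separated ACE list into (name, operator, mode) tuples."""
    entries = []
    for raw in spec.split(","):
        m = ENTRY_RE.match(raw.strip())
        if not m:
            raise ValueError(f"invalid ACE: {raw!r}")
        name, op, mode = m.groups()
        entries.append((name, op or "=", mode))  # empty operator means =
    return entries

print(parse_aces("bob:+rwx, alice:-w"))
# [('bob', '+', 'rwx'), ('alice', '-', 'w')]
print(parse_aces("*:+rw"))
# [('*', '+', 'rw')]
```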

Example

Given a file with the following getfacl output:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::---

Applying this method with the following parameters:

  • path: /tmp/myTestFile

  • recursive: false

  • user: *:-x, bob:

  • group: *:+rw

  • other: =r

Will transform the previous ACLs in:

root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rw-
user:bob:---
group::rw-
mask::rw-
other::r--

This method cannot remove a given ACE; note in the example above how the ACE for user bob is handled.


registry_entry

resource registry_entry(key, entry)
  • entry: Registry entry

  • key: Registry key (ie, HKLM:\Software\Rudder)

States

present [windows]

This generic method defines if a registry entry exists with the correct value

registry_entry(key, entry).present(value, registryType)
  • registryType: Registry value type (String, ExpandString, MultiString, Dword, Qword)

  • value: Registry value


absent [windows]

This generic method checks that a registry entry does not exist

registry_entry(key, entry).absent()

registry_key

resource registry_key(key)
  • key: Registry key (ie, HKLM:\Software\Rudder)

States

present [windows]

This generic method checks that a Registry Key exists

registry_key(key).present()

Create a Registry Key if it does not exist. There are two different supported syntaxes to describe a Registry Key:

  • with short drive name and ":" like HKLM:\SOFTWARE\myKey

  • with long drive name and without ":" like HKEY_LOCAL_MACHINE\SOFTWARE\myKey

Please note that Rudder cannot create new drives or new "first-level" Registry Keys.
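
The two accepted syntaxes can be normalized to the short-drive form, as this illustrative Python sketch shows (the hive mapping is an assumption, not Rudder's implementation):

```python
# Illustrative normalization of the two accepted Registry Key syntaxes.
# The hive mapping covers common hives only; this is not Rudder code.
LONG_TO_SHORT = {
    "HKEY_LOCAL_MACHINE": "HKLM",
    "HKEY_CURRENT_USER": "HKCU",
    "HKEY_CLASSES_ROOT": "HKCR",
    "HKEY_USERS": "HKU",
}

def normalize_key(key: str) -> str:
    """Return the key in short-drive form, e.g. HKLM:\\SOFTWARE\\myKey."""
    drive, _, rest = key.partition("\\")
    drive = drive.rstrip(":")          # accept both "HKLM:" and "HKEY_..."
    short = LONG_TO_SHORT.get(drive, drive)
    return f"{short}:\\{rest}"

print(normalize_key(r"HKLM:\SOFTWARE\myKey"))               # HKLM:\SOFTWARE\myKey
print(normalize_key(r"HKEY_LOCAL_MACHINE\SOFTWARE\myKey"))  # HKLM:\SOFTWARE\myKey
```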


absent [windows]

This generic method checks that a registry key does not exist

registry_key(key).absent()

Remove a Registry Key if it is present on the system.

There are two different supported syntaxes to describe a Registry Key:

  • with short drive name and ":" like HKLM:\SOFTWARE\myKey

  • with long drive name and without ":" like HKEY_LOCAL_MACHINE\SOFTWARE\myKey

Please note that Rudder cannot remove drives or "first-level" Registry Keys.


report

resource report(report_message)
  • report_message: Message subject, will be extended based on the report status

States

if_condition [unix]

Send a Rudder report based on a condition.

report(report_message).if_condition(condition)
  • condition: Condition to report a success

This method will only send a Rudder report:

If the condition is met, it will send a compliant report, with the following message: <report_message> was correct.

Otherwise, it will send an error report, with the following message: <report_message> was incorrect.

This method will never be in a repaired state.


schedule

resource schedule(job_id)
  • job_id: A string to identify this job

States

simple_stateless [unix]

Trigger a repaired outcome when a job should be run (without checks)

schedule(job_id).simple_stateless(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days)
  • agent_periodicity: Agent run interval (in minutes)

  • max_execution_delay_hours: On how many hours you want to spread the job

  • max_execution_delay_minutes: On how many minutes you want to spread the job

  • periodicity_days: Desired job run interval (in days)

  • periodicity_hours: Desired job run interval (in hours)

  • periodicity_minutes: Desired job run interval (in minutes)

  • start_on_day_of_week: At which day of week should be the first run

  • start_on_hours: At which hour should be the first run

  • start_on_minutes: At which minute should be the first run

This bundle will define a condition schedule_simple_${job_id}_{kept,repaired,not_ok,ok,reached}

  • _ok or _kept for when there is nothing to do

  • _repaired if the job should run

  • _not_ok and _reached have their usual meaning

No effort is made to check whether a run has already been done for this period. If the agent is run twice, the job will be run twice, and if the agent is not run, the job will not be run.


simple_nodups [unix]

Trigger a repaired outcome when a job should be run (avoid running twice)

schedule(job_id).simple_nodups(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days)
  • agent_periodicity: Agent run interval (in minutes)

  • max_execution_delay_hours: On how many hours you want to spread the job

  • max_execution_delay_minutes: On how many minutes you want to spread the job

  • periodicity_days: Desired job run interval (in days)

  • periodicity_hours: Desired job run interval (in hours)

  • periodicity_minutes: Desired job run interval (in minutes)

  • start_on_day_of_week: At which day of week should be the first run

  • start_on_hours: At which hour should be the first run

  • start_on_minutes: At which minute should be the first run

This bundle will define a condition schedule_simple_${job_id}_{kept,repaired,not_ok,ok,reached}

  • _ok or _kept for when there is nothing to do

  • _repaired if the job should run

  • _not_ok and _reached have their usual meaning

If the agent is run twice (for example from a manual run), the job is run only once. However, if the agent run is skipped during the period, the job is never run.


simple_catchup [unix]

Trigger a repaired outcome when a job should be run (avoid losing a job)

schedule(job_id).simple_catchup(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days)
  • agent_periodicity: Agent run interval (in minutes)

  • max_execution_delay_hours: On how many hours you want to spread the job

  • max_execution_delay_minutes: On how many minutes you want to spread the job

  • periodicity_days: Desired job run interval (in days)

  • periodicity_hours: Desired job run interval (in hours)

  • periodicity_minutes: Desired job run interval (in minutes)

  • start_on_day_of_week: At which day of week should be the first run

  • start_on_hours: At which hour should be the first run

  • start_on_minutes: At which minute should be the first run

This bundle will define a condition schedule_simple_${job_id}_{kept,repaired,not_ok,ok,reached}

  • _ok or _kept for when there is nothing to do

  • _repaired if the job should run

  • _not_ok and _reached have their usual meaning

If the agent run is skipped during the period, the method tries to catch up the run on the next agent run. If the agent run is skipped twice, only one run is caught up. If the agent is run twice (for example from a manual run), the job is run only once.


simple [unix]

Trigger a repaired outcome when a job should be run

schedule(job_id).simple(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days, mode)
  • agent_periodicity: Agent run interval (in minutes)

  • max_execution_delay_hours: On how many hours you want to spread the job

  • max_execution_delay_minutes: On how many minutes you want to spread the job

  • mode: "nodups": avoid duplicate runs in the same period / "catchup": avoid duplicates, and if one or more runs have been missed, run once before the next period / "stateless": no check is done on past runs

  • periodicity_days: Desired job run interval (in days)

  • periodicity_hours: Desired job run interval (in hours)

  • periodicity_minutes: Desired job run interval (in minutes)

  • start_on_day_of_week: At which day of week should be the first run

  • start_on_hours: At which hour should be the first run

  • start_on_minutes: At which minute should be the first run

This method computes the expected time for running the job, based on the parameters and splayed using system ids, and defines conditions based on this computation:

  • schedule_simple_${job_id}_kept if the job should not be run now

  • schedule_simple_${job_id}_repaired if the job should be run

  • schedule_simple_${job_id}_error if there is an inconsistency in the method parameters

Example

If you want to run a job every hour and half-hour (0:00 and 0:30), with no spread across systems, with an agent running on the default 5-minute schedule, and making sure that the job is run (if the agent could not run it, the job should be run at the next agent execution), you will call the method with the following parameters:

schedule_simple("job_schedule_id", "5", "0", "0",  "0", "0", "0",  "30", "0", "0", "catchup")

During each run right after the hour and half-hour, this method will define the condition schedule_simple_job_schedule_id_repaired, which you can use as a condition for the generic method command_execution.
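
The stateless scheduling decision can be sketched as follows in Python (a simplified model that ignores splay and day-of-week handling; the should_run helper is hypothetical):

```python
# Simplified sketch of the stateless scheduling decision: trigger when a
# scheduled slot falls within the current agent run interval. Real Rudder
# also handles splay and day-of-week; this is illustrative only.
def should_run(now_min: int, start_min: int, period_min: int,
               agent_period_min: int) -> bool:
    """True when a slot falls within the current agent run interval."""
    since_start = now_min - start_min
    if since_start < 0:
        return False
    return since_start % period_min < agent_period_min

# Job every 30 minutes starting at minute 0, agent running every 5 minutes:
print(should_run(30, 0, 30, 5))  # True: exactly on the slot
print(should_run(32, 0, 30, 5))  # True: slot at 30 still within this interval
print(should_run(35, 0, 30, 5))  # False: slot already past
```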


service

resource service(name)
  • name: Service name (as recognized by systemd, init.d, etc…​)

States

stopped [windows, unix]

Ensure that a service is stopped using the appropriate method

service(name).stopped()

status [windows]

This generic method defines if service should run or be stopped

service(name).status(status)
  • status: Desired state for the user - can be 'Stopped' or 'Running'

    • Stopped, Running


started_path [unix]

Ensure that a service is running using the appropriate method, specifying the path of the service in the ps output, or using Windows task manager

service(name).started_path(path)
  • path: Service with its path, as in the output from 'ps'


started [windows, unix]

Ensure that a service is running using the appropriate method

service(name).started()

restart [windows, unix]

Restart a service using the appropriate method

service(name).restart()

See service_action for documentation.


reload [unix]

Reload a service using the appropriate method

service(name).reload()

See service_action for documentation.


enabled [windows, unix]

Force a service to be started at boot

service(name).enabled()

disabled [unix]

Force a service not to be enabled at boot

service(name).disabled()

check_started_at_boot [unix]

Check if a service is set to start at boot using the appropriate method

service(name).check_started_at_boot()

check_running_ps [unix]

Check if a service is running using ps

service(name).check_running_ps()

check_running [unix]

Check if a service is running using the appropriate method

service(name).check_running()

check_disabled_at_boot [unix]

Check if a service is set to not start at boot using the appropriate method

service(name).check_disabled_at_boot()

action [unix]

Trigger an action on a service using the appropriate tool

service(name).action(action)
  • action: Action to trigger on the service (start, stop, restart, reload, …​)

The service_* methods manage the services running on the system.

Parameters

Service name

The name of the service is the name understood by the service manager, except for the is-active-process action, where it is the regex to match against the running processes list.

Action

The action is the name of an action to run on the given service. The following actions can be used:

  • start

  • stop

  • restart

  • reload (or refresh)

  • is-active (or status)

  • is-active-process (in this case, the "service" parameter is the regex to match against the process list)

  • enable

  • disable

  • is-enabled

Other actions may also be used, depending on the selected service manager.

Implementation

These methods will detect the method to use according to the platform. You can run the methods with an info verbosity level to see which service manager will be used for a given action.

Warning
Due to compatibility issues when mixing calls to systemctl and service/init.d, when an init script exists, we will not use the systemctl compatibility layer but call service/init.d directly.

The supported service managers are:

  • systemd (any unknown action will be passed directly)

  • upstart

  • smf (for Solaris)

  • service command (for non-boot actions, any unknown action will be passed directly)

  • /etc/init.d scripts (for non-boot actions, any unknown action will be passed directly)

  • SRC (for AIX) (for non-boot actions)

  • chkconfig (for boot actions)

  • update-rc.d (for boot actions)

  • chitab (for boot actions)

  • links in /etc/rcX.d (for boot actions)

  • Windows services

Examples
# To restart the apache2 service
service_action("apache2", "restart");
service_restart("apache2");

stop [unix] - DEPRECATED

Stop a service using the appropriate method

service(name).stop()

See service_action for documentation.


start [unix] - DEPRECATED

Start a service using the appropriate method

service(name).start()

See service_action for documentation.


restart_if [unix] - DEPRECATED

Restart a service using the appropriate method if the specified class is true, otherwise it is considered as not required and success classes are returned.

service(name).restart_if(expression)
  • expression: Condition expression which will trigger the restart of the service, for example "(package_service_installed|service_conf_changed)"

See service_action for documentation.


sharedfile

resource sharedfile(remote_node, file_id)
  • file_id: Unique name that will be used to identify the file on the receiver

    • must match: ^[A-z0-9._-]+$

  • remote_node: Which node to share the file with

States

to_node [unix]

This method shares a file with another Rudder node

sharedfile(remote_node, file_id).to_node(file_path, ttl)
  • file_path: Path of the file to share

  • ttl: Time to keep the file on the policy server in seconds or in human readable form (see long description)

    • must match: ^(\d+\s*(days?|d))?(\d+\s*(hours?|h))?(\d+\s*(minutes?|m))?(\d+\s*(seconds?|s))?$

This method shares a file with another Rudder node using a unique file identifier.

Read the Rudder documentation for a high level overview of file sharing between nodes.

The file will be kept on the policy server and transmitted to the destination node’s policy server if it is different. It will be kept on this server for the destination node to download as long as it is not replaced by a new file with the same id or removed when its TTL expires.

Parameters

This section describes the generic method parameters.

remote_node

The node you want to share this file with. The uuid of a node is visible in the Nodes details (in the Web interface) or by entering rudder agent info on the target node.

file_id

This is a name that will be used to identify the file in the target node. It should be unique and describe the file content.

file_path

The local absolute path of the file to share.

ttl

The TTL can be:

  • A simple integer, in this case it is assumed to be a number of seconds

  • A string including units indications, the possible units are:

  • days, day or d

  • hours, hour, or h

  • minutes, minute, or m

  • seconds, second or s

The ttl value can look like 1day 2hours 3minutes 4seconds, can be abbreviated in the form 1d 2h 3m 4s, written without spaces 1d2h3m4s, or any combination like 1day2h 3minute 4seconds. Any unit can be skipped, but the decreasing order needs to be respected.
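
The TTL grammar can be sketched with the regular expression given for the ttl parameter; this Python helper is illustrative only (the extra whitespace handling between units is an assumption):

```python
import re

# Sketch of the TTL grammar above: optional day/hour/minute/second
# groups in decreasing order. The inter-group \s* is an assumption to
# accept forms like "1day 2hours".
TTL_RE = re.compile(
    r"^(?:(\d+)\s*(?:days?|d))?\s*(?:(\d+)\s*(?:hours?|h))?"
    r"\s*(?:(\d+)\s*(?:minutes?|m))?\s*(?:(\d+)\s*(?:seconds?|s))?$"
)

def ttl_seconds(ttl: str) -> int:
    """Convert a TTL string (or a bare number of seconds) to seconds."""
    ttl = ttl.strip()
    if ttl.isdigit():
        return int(ttl)  # a simple integer is a number of seconds
    m = TTL_RE.match(ttl)
    if not m or not any(m.groups()):
        raise ValueError(f"invalid ttl: {ttl!r}")
    d, h, mi, s = (int(g or 0) for g in m.groups())
    return ((d * 24 + h) * 60 + mi) * 60 + s

print(ttl_seconds("1d2h3m4s"))  # 93784
print(ttl_seconds("356 days"))  # 30758400
print(ttl_seconds("300"))       # 300
```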


Example:

We have a node A, with uuid 2bf1afdc-6725-4d3d-96b8-9128d09d353c which wants to share the /srv/db/application.properties with node B with uuid 73570beb-2d4a-43d2-8ffc-f84a6817849c.

We want this file to stay available for one year for node B on its policy server.

The node B wants to download it into /opt/application/etc/application.properties.

They have to agree (i.e. it has to be defined in the policies of both nodes) on the id of the file, which will be used during the exchange; here it will be application.properties.

To share the file, node A will use:

sharedfile_to_node("73570beb-2d4a-43d2-8ffc-f84a6817849c", "application.properties", "/srv/db/application.properties", "356 days")

To download the file, node B will use sharedfile_from_node with:

sharedfile_from_node("2bf1afdc-6725-4d3d-96b8-9128d09d353c", "application.properties", "/opt/application/etc/application.properties")

from_node [unix]

This method retrieves a file shared from another Rudder node

sharedfile(remote_node, file_id).from_node(file_path)
  • file_path: Where to put the file content

This method retrieves a file shared from a Rudder node using a unique file identifier.

The file will be downloaded using native agent protocol and copied into a new file. The destination path must be the complete absolute path of the destination file.

See sharedfile_to_node for a complete example.


sysctl

resource sysctl(key)
  • key: The key to enforce

States

value [unix]

Enforce a value in sysctl (optionally increase or decrease it)

sysctl(key).value(value, filename, option)
  • filename: File name where to put the value in /etc/sysctl.d (without the .conf extension)

  • option: Optional modifier on value: Min, Max or Default (default value)

    • can be empty

  • value: The desired value

Enforce a value in sysctl

Behaviors

This method checks the current value defined for the given key:

  • If it is not set, the method attempts to set it in the file given as argument.

  • If it is set and corresponds to the desired value, the method succeeds.

  • If it is set and does not correspond, the value is set in the given file, the sysctl configuration is reloaded with sysctl --system, and the resulting value is checked. If it is not taken into account by sysctl, because it is overridden in another file or is an invalid key, the method returns an error.

Prerequisite

This method requires an /etc/sysctl.d folder, and the sysctl --system option. It does not support Debian 6 or earlier, CentOS/RHEL 6 or earlier, SLES 11 or earlier, Ubuntu 12.04 or earlier, AIX and Solaris.

Parameters

  • key: the key to enforce/check

  • value: the expected value for the key

  • filename: file name (without extension) containing the key=value pair when it needs to be set, within /etc/sysctl.d. This method adds the correct extension at the end of the file name.

  • option (optional):

    • min: the value is the minimal value requested; it is only changed if the current value is lower than value

    • max: the value is the maximal value requested; it is only changed if the current value is higher than value

    • default (default value): the value is strictly enforced

Comparison is numerical if possible, alphanumerical otherwise. So 10 > 2, but Test10 < Test2.

Examples

To ensure that swappiness is disabled, and storing the configuration parameter in 99_rudder.conf

 sysctl_value("vm.swappiness", "99_rudder", "0", "")

To ensure that the UDP buffer is at least 26214400

 sysctl_value("net.core.rmem_max", "99_rudder", "26214400", "min")
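
The comparison rule behind the min/max/default options can be sketched in Python (illustrative only, not Rudder's implementation):

```python
# Illustrative comparison rule: numerical when both values parse as
# numbers, alphanumerical otherwise (so 10 > 2, but Test10 < Test2).
def sysctl_cmp(current: str, desired: str) -> int:
    """Return -1, 0 or 1 comparing current to desired."""
    try:
        a, b = float(current), float(desired)
    except ValueError:
        a, b = current, desired
    return (a > b) - (a < b)

def needs_change(current: str, desired: str, option: str) -> bool:
    """Apply the min/max/default modifier described above."""
    c = sysctl_cmp(current, desired)
    if option == "min":
        return c < 0   # only raise values below the minimum
    if option == "max":
        return c > 0   # only lower values above the maximum
    return c != 0      # default: enforce strictly

print(needs_change("10", "2", "min"))       # False: 10 >= 2
print(needs_change("Test10", "Test2", ""))  # True: compared alphanumerically
```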

sysinfo

resource sysinfo(query)
  • query: The query to execute (ending with a semicolon)

States

query [unix]

Audit a system property through osquery

sysinfo(query).query(comparator, value)
  • comparator: The comparator to use ('=', '!=' or '~', default is '=')

    • empty, =, !=, ~

  • value: The expected value

This method uses osquery to fetch information about the system, and compares the value with the given one, using the provided comparator.

Parameters
  • query is an osquery query returning exactly one result

  • comparator is the comparator to use: "=" for equality, "!=" for non-equality, "~" for regex comparison

  • value is the expected value, can be a string or a regex depending on the comparator

Setup

This method requires the presence of osquery on the target nodes. It won’t install it automatically. Check the correct way of doing so for your OS.

Building queries

To learn about the possible queries, read the osquery schema for your osquery version.

You can test the queries before using them with the osqueryi command, see the example below.

osqueryi "select cpu_logical_cores from system_info;"

You need to provide a query that returns exactly one value. If it’s not the case, the method will fail as it does not know what to check.

Examples
# To check the number of cpus on the machine
audit_from_osquery("select cpu_logical_cores from system_info;", "2");

Will report a compliant report if the machine has 2 logical cores, and a non-compliant one otherwise.
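
The single-value requirement and the comparators can be illustrated against a simulated osqueryi --json output; the audit_value helper below is hypothetical, not Rudder code:

```python
import json
import re

# Illustrative check against (simulated) osqueryi --json output; the
# helper and its error handling are assumptions, not Rudder code.
def audit_value(osquery_json: str, comparator: str, expected: str) -> bool:
    rows = json.loads(osquery_json)
    if len(rows) != 1 or len(rows[0]) != 1:
        raise ValueError("the query must return exactly one value")
    actual = next(iter(rows[0].values()))
    if comparator in ("", "="):
        return actual == expected
    if comparator == "!=":
        return actual != expected
    if comparator == "~":
        return re.fullmatch(expected, actual) is not None
    raise ValueError(f"unknown comparator: {comparator!r}")

sample = '[{"cpu_logical_cores": "2"}]'
print(audit_value(sample, "=", "2"))     # True
print(audit_value(sample, "~", "[24]"))  # True: regex comparison
```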


technique_simplest

rudderlang simplest for a complete loop

resource technique_simplest()

States

technique
technique_simplest().technique()

user

resource user(login)
  • login: User’s login

States

uid [unix]

Define the uid of the user. The user must already exist, and the uid must not already be in use (it must be unique).

user(login).uid(uid)
  • uid: User’s uid

This method does not create the user.


status [windows]

This generic method defines if user is present or absent

user(login).status(status)
  • status: Desired state for the user - can be 'Present' or 'Absent'

    • Present, Absent


shell [unix]

Define the shell of the user. User must already exist.

user(login).shell(shell)
  • shell: User’s shell

This method does not create the user. Entry example: /bin/false


primary_group [unix]

Define the primary group of the user. User must already exist.

user(login).primary_group(primary_group)
  • primary_group: User’s primary group

This method does not create the user.


present [windows, unix]

Ensure a user exists on the system.

user(login).present()

This method does not create the user’s home directory. The primary group will be created and set to the default one, following the useradd default behavior. As is the default behavior on most UNIX systems, user creation will fail if a group with the user’s name already exists.


password_hash [unix]

Ensure a user’s password. Password must respect $id$salt$hashed format as used in the UNIX /etc/shadow file.

user(login).password_hash(password)
  • password: User hashed password

The user must exist and the password must be pre-hashed. This method does not handle empty-password accounts. See the UNIX /etc/shadow format. Entry example: $1$jp5rCMS4$mhvf4utonDubW5M00z0Ow0

An empty password will lead to an error and be notified.


password_clear [windows]

Ensure a user’s password, provided in clear text.

user(login).password_clear(password)
  • password: User clear password

The user must exist; the password will appear in clear text in the code. An empty password will lead to an error and be notified.


locked [unix]

Ensure the user is locked. User must already exist.

user(login).locked()

This method does not create the user. Note that locked accounts will be marked with "!" in /etc/shadow, which is equivalent to "*". To unlock a user, apply a user_password method.


home [unix]

Define the home of the user. The user must already exist.

user(login).home(home)
  • home: User’s home

This method does not create the user nor the home directory; the given home will be set but not created. Entry example: /home/myuser


group [unix]

Define secondary group for a user

user(login).group(group_name)
  • group_name: Secondary group name for the user

Ensure that a user is within a group

Behavior

Ensure that the user belongs in the given secondary group (non-exclusive)

Parameters

  • login: the user login

  • group_name: secondary group name the user should belong to (non-exclusive)

Examples

To ensure that user test belongs in group dev

 user_group("test", "dev")

Note that it will make sure that user test is in group dev, but it won’t remove it from other groups it may belong to.


fullname [unix]

Define the fullname of the user. The user must already exist.

user(login).fullname(fullname)
  • fullname: User’s fullname

This method does not create the user.


absent [windows, unix]

Remove a user

user(login).absent()

This method ensures that a user does not exist on the system.


create [unix] - DEPRECATED

Create a user

user(login).create(description, home, group, shell, locked)
  • description: User description

  • group: User’s primary group

  • home: User’s home directory

  • locked: Is the user locked? true or false

  • shell: User’s shell

This method does not create the user’s home directory.


variable

resource variable(prefix, name)
  • name: The variable to define, the full name will be prefix.name

  • prefix: The prefix of the variable name

States

string_from_math_expression [unix]

Define a variable from a mathematical expression

variable(prefix, name).string_from_math_expression(expression, format)
  • expression: The mathematical expression to evaluate

  • format: The format string to use

To use the generated variable, you must use the form ${prefix.name} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.

Usage

This function will evaluate a mathematical expression that may contain variables and format the result according to the provided format string.

The formatting string uses the standard POSIX printf format.

Supported mathematical expressions

All the mathematical computations are done using floats.

The supported infix mathematical syntax, in order of precedence, is:

  • ( and ) parentheses for grouping expressions

  • ^ operator for exponentiation

  • * and / operators for multiplication and division

  • % operator for modulo operation

  • + and - operators for addition and subtraction

  • == "close enough" operator to tell if two expressions evaluate to the same number, with a tiny margin to tolerate floating point errors. It returns 1 or 0.

  • >= "greater or close enough" operator with a tiny margin to tolerate floating point errors. It returns 1 or 0.

  • > "greater than" operator. It returns 1 or 0.

  • "less than or close enough" operator with a tiny margin to tolerate floating point errors. It returns 1 or 0.

  • < "less than" operator. It returns 1 or 0.

The numbers can be in any format acceptable to the C scanf function with the %lf format specifier, optionally followed by the k, m, g, t, or p SI unit suffixes. So e.g. -100 and 2.34m are valid numbers.
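A hypothetical Python sketch of this number syntax; the suffix multipliers below follow the standard SI meanings (k = 10^3 through p = 10^15), which is an assumption about the implementation:

```python
import re

# Hypothetical parser: a C "%lf"-style float optionally followed by
# an SI multiplier suffix (k, m, g, t, p).
SI = {"k": 1e3, "m": 1e6, "g": 1e9, "t": 1e12, "p": 1e15}

def parse_number(token):
    match = re.fullmatch(r"([+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?)([kmgtp]?)", token)
    if not match:
        raise ValueError("not a number: " + token)
    value, suffix = match.groups()
    return float(value) * SI.get(suffix, 1.0)

print(parse_number("-100"))   # -100.0
print(parse_number("2.34m"))  # approximately 2.34e6
```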

In addition, the following constants are recognized:

  • e: 2.7182818284590452354

  • log2e: 1.4426950408889634074

  • log10e: 0.43429448190325182765

  • ln2: 0.69314718055994530942

  • ln10: 2.30258509299404568402

  • pi: 3.14159265358979323846

  • pi_2: 1.57079632679489661923 (pi over 2)

  • pi_4: 0.78539816339744830962 (pi over 4)

  • 1_pi: 0.31830988618379067154 (1 over pi)

  • 2_pi: 0.63661977236758134308 (2 over pi)

  • 2_sqrtpi: 1.12837916709551257390 (2 over square root of pi)

  • sqrt2: 1.41421356237309504880 (square root of 2)

  • sqrt1_2: 0.70710678118654752440 (square root of 1/2)

The following functions can be used, with parentheses:

  • ceil and floor: round up to the next integer or down to the previous integer

  • log10, log2, log

  • sqrt

  • sin, cos, tan, asin, acos, atan

  • abs: absolute value

  • step: 0 if the argument is negative, 1 otherwise

Formatting options

The format field supports the following specifiers:

  • %d for decimal integer

  • %x for hexadecimal integer

  • %o for octal integer

  • %f for decimal floating point

You can use the usual flags, width and precision syntax.
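Python's %-style formatting follows the same POSIX printf conventions, so the specifiers, flags, width and precision can be previewed like this:

```python
# The format field follows POSIX printf: flags, width and precision apply.
print("%d" % 5)            # decimal integer
print("%x" % 255)          # hexadecimal integer: ff
print("%o" % 8)            # octal integer: 10
print("%07.2f" % 3.14159)  # zero-padded to width 7, 2 decimals: 0003.14
```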

Examples

If you use:

variable_string("prefix", "var", "10");
variable_string_from_math_expression("prefix", "sum", "2.0+3.0", "%d");
variable_string_from_math_expression("prefix", "product", "3*${prefix.var}", "%d");

The prefix.sum string variable will contain 5 and prefix.product will contain 30.


string_from_file [windows, unix]

Define a variable from a file content

variable(prefix, name).string_from_file(file_name)
  • file_name: The path of the file

To use the generated variable, you must use the form ${variable_prefix.variable_name} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.


string_from_command [windows, unix]

Define a variable from a command output

variable(prefix, name).string_from_command(command)
  • command: The command to execute

Define a variable from a command output. The method will execute a shell command and define the variable ${prefix.name} from its output.

  • Only stdout is kept

  • The variable will only be defined if the exit code of the command is 0

  • If the variable definition is successful, the method reports a success; otherwise it reports an error.

  • The command will be executed even in Audit mode
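A minimal Python sketch of these capture rules (the helper name is hypothetical, not the agent's actual implementation):

```python
import subprocess

def string_from_command(command):
    # Run through a shell; capture stdout only (stderr is discarded).
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    if result.returncode != 0:
        return None  # variable left undefined, the method reports an error
    return result.stdout  # variable defined, the method reports a success

print(string_from_command("echo hello; echo noise >&2"))  # only stdout is kept
print(string_from_command("false"))  # None: non-zero exit code
```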


string_from_augeas []

Use the Augeas binaries to run augtool commands and options to get a node label’s value.

variable(prefix, name).string_from_augeas(path, lens, file)
  • file: The absolute path to the target file, if you want to load a specific file associated with its lens

    • can be empty

  • lens: The lens to load, if you want to load a specific lens associated with its file

    • can be empty

  • path: The path to the file and node label

Augeas is a tool that provides an abstraction layer over the complexity of editing configuration files with regular expressions. It represents system configuration files as a tree hierarchy, letting you modify them safely. To do so, you have to provide the path to the node label’s value.

This method aims to use augtool to extract a specific piece of information from a configuration file into a Rudder variable. If Augeas is not installed on the agent, or if it fails to execute, it will produce an error.

  • variable prefix: target variable prefix

  • variable name: target variable name

  • path: Augeas node path, used to describe the location of the target information we want to extract

  • lens: augeas lens to use, optional

  • file: absolute file path to target, optional

There are two ways to use this method:

  • Either provide the Augeas path to the node’s label and leave lens and file empty; this way Augeas will load the common files and lenses automatically

  • Or provide a given file path and a specific lens. This gives better performance since only one lens is loaded, and it supports custom lenses and custom paths

This mechanism is the same as in the file_augeas_set method.

With autoload

Let’s consider that you want to obtain the value of the ip address of the first line in the /etc/hosts:

(Note that the label and value parameters mentioned are naming examples for the variable prefix and variable name; the Augeas path /etc/hosts/1/ipaddr represents the ipaddr node label’s value, in the Augeas sense, in the first line of the file /etc/hosts.)

variable_string_from_augeas("label","value","/etc/hosts/1/ipaddr", "", "");
Without autoload

Here we want the same information as in the first example, but we will force the lens to avoid loading unnecessary files.

variable_string_from_augeas("label","value","/etc/hosts/1/ipaddr","Hosts","/etc/hosts");
Difference with file augeas command

This method is very similar to the file augeas command one: both execute an augtool command and dump its output into a Rudder variable. But their goals are really different:

  • This one parses the output of the Augeas print command to make it directly usable, but is less flexible in its input.

  • The file augeas command offers many more possibilities to execute an Augeas command to modify a file, but its output is unparsed and most likely unusable as a Rudder variable, except to dump an error or configuration somewhere.


string_default [unix]

Define a variable from another variable name, with a default value if undefined

variable(prefix, name).string_default(source_variable, default_value)
  • default_value: The default value to use if source_variable is not defined

  • source_variable: The source variable name

To use the generated variable, you must use the form ${prefix.name} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
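The fallback rule can be sketched in Python using a hypothetical variable store (the names below are illustrative, not actual agent variables):

```python
# Hypothetical store of already-defined variables.
variables = {"prefix.defined_var": "actual"}

def string_default(source_variable, default_value):
    # Use the source variable's value when it is defined,
    # the default value otherwise.
    return variables.get(source_variable, default_value)

print(string_default("prefix.defined_var", "fallback"))  # actual
print(string_default("prefix.missing_var", "fallback"))  # fallback
```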


string [windows, unix]

Define a variable from a string parameter

variable(prefix, name).string(value)
  • value: The variable content

To use the generated variable, you must use the form ${prefix.name} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.


iterator_from_file [unix]

Define a variable that will be automatically iterated over

variable(prefix, name).iterator_from_file(file_name, separator_regex, comments_regex)
  • comments_regex: Regular expression that is used to remove comments ( usually: \s*#.*?(?=\n) )

  • file_name: The path to the file

  • separator_regex: Regular expression that is used to split the value into items ( usually: \n )

The generated variable is a special variable that is automatically iterated over. When you call a generic method with this variable as a parameter, n calls will be made, one for each item of the variable. Note: there is a limit of 10000 items. Note: empty items are ignored.

To use the generated variable, you must use the form ${prefix.name} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
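A hypothetical Python sketch of how the file content becomes iteration items, assuming the usual defaults mentioned above (comment stripping, newline separator, empty items dropped, 10000-item cap):

```python
import re

def make_items(content, separator_regex=r"\n", comments_regex=r"\s*#.*?(?=\n)"):
    # Strip comments, split on the separator regex, drop empty items,
    # and cap the list at 10000 items.
    content = re.sub(comments_regex, "", content)
    items = [item for item in re.split(separator_regex, content) if item]
    return items[:10000]

text = "alpha  # comment\nbeta\n\ngamma\n"
print(make_items(text))  # ['alpha', 'beta', 'gamma']
```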


iterator [unix]

Define a variable that will be automatically iterated over

variable(prefix, name).iterator(value, separator)
  • separator: Regular expression that is used to split the value into items ( usually: , )

    • can contain only white-space chars

  • value: The variable content

The generated variable is a special variable that is automatically iterated over. When you call a generic method with this variable as a parameter, n calls will be made, one for each items of the variable. Note: there is a limit of 10000 items

To use the generated variable, you must use the form ${prefix.name} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.


dict_merge_tolerant [unix]

Define a variable resulting of the merge of two other variables, allowing merging undefined variables

variable(prefix, name).dict_merge_tolerant(first_variable, second_variable)
  • first_variable: The first variable, whose content will be overridden in the resulting variable if necessary (written in the form prefix.name)

  • second_variable: The second variable, whose content will override the first in the resulting variable if necessary (written in the form prefix.name)

To use the generated variable, you must use the form ${prefix.name[key]} with each name replaced with the parameters of this method.

See variable_dict_merge for usage documentation. The only difference is that this method will not fail if one of the variables does not exist, and will return the other one instead. If both are undefined, the method will still fail.


dict_merge [windows, unix]

Define a variable resulting of the merge of two other variables

variable(prefix, name).dict_merge(first_variable, second_variable)
  • first_variable: The first variable, whose content will be overridden in the resulting variable if necessary (written in the form prefix.name)

  • second_variable: The second variable, whose content will override the first in the resulting variable if necessary (written in the form prefix.name)

To use the generated variable, you must use the form ${prefix.name[key]} with each name replaced with the parameters of this method.

The resulting variable will be the merge of the two parameters, which means it is built by:

  • Taking the content of the first variable

  • Adding the content of the second variable, and replacing the keys that were already there

It is only a one-level merge; the value of a first-level key will be completely replaced by the merge.

This method will fail if one of the variables is not defined. See variable_dict_merge_tolerant if you want to allow one of the variables not to be defined.

Usage

If you have a prefix.variable1 variable defined by:

{ "key1": "value1", "key2": "value2", "key3": { "keyx": "valuex" } }

And a prefix.variable2 variable defined by:

{ "key1": "different", "key3": "value3", "key4": "value4" }

And that you use:

variablr_dict_merge("prefix", "variable3, "prefix.variable1", "prefix.variable2")

You will get a prefix.variable3 variable containing:

{
  "key1": "different",
  "key2": "value2",
  "key3": "value3",
  "key4": "value4"
}
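The example above can be reproduced with a plain one-level dictionary merge in Python; note that the nested value under key3 is replaced wholesale, not merged:

```python
variable1 = {"key1": "value1", "key2": "value2", "key3": {"keyx": "valuex"}}
variable2 = {"key1": "different", "key3": "value3", "key4": "value4"}

# One-level merge: keys from the second dict override the first,
# and nested values are replaced entirely rather than merged.
variable3 = {**variable1, **variable2}
print(variable3)
# {'key1': 'different', 'key2': 'value2', 'key3': 'value3', 'key4': 'value4'}
```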

dict_from_osquery [unix]

Define a variable that contains key,value pairs (a dictionary) from an osquery query

variable(prefix, name).dict_from_osquery(query)
  • query: The query to execute (ending with a semicolon)

To use the generated variable, you must use the form ${prefix.name[key]} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.

This method will define a dict variable from the output of an osquery query. The query will be executed at every agent run, and its result will be usable as a standard dict variable.

Setup

This method requires the presence of osquery on the target nodes. It won’t install it automatically. Check the correct way of doing so for your OS.

Building queries

To learn about the possible queries, read the osquery schema for your osquery version.

You can test the queries before using them with the osqueryi command, see the example below.

Examples
# To get the number of cpus on the machine
variable_dict_from_osquery("prefix", "var1", "select cpu_logical_cores from system_info;");

It will produce the dict from the output of:

osqueryi --json "select cpu_logical_cores from system_info;"

Hence something like:

[
 {"cpu_logical_cores":"8"}
]

To access this value, use the ${prefix.var1[0][cpu_logical_cores]} syntax.
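Accessing that value mirrors indexing the parsed JSON array, as this Python sketch shows (using the sample output above):

```python
import json

# osqueryi --json returns a JSON array of row objects;
# ${prefix.var1[0][cpu_logical_cores]} corresponds to this indexing.
output = '[\n {"cpu_logical_cores":"8"}\n]'
rows = json.loads(output)
print(rows[0]["cpu_logical_cores"])  # 8
```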


dict_from_file_type [unix]

Define a variable that contains key,value pairs (a dictionary) from a JSON, CSV or YAML file

variable(prefix, name).dict_from_file_type(file_name, file_type)
  • file_name: The file name to load data from

  • file_type: The file type, can be "JSON", "CSV", "YAML" or "auto" for auto detection based on file extension, with a fallback to JSON (default is "auto")

    • empty, auto, JSON, YAML, CSV

To use the generated variable, you must use the form ${prefix.name[key]} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive).

This method will load data from various file formats (YAML, JSON, CSV).

CSV parsing

The input file must use CRLF as line delimiter to be readable (as stated in RFC 4180).
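As an illustration of the expected input, Python's csv module reads such CRLF-delimited content into one dictionary per row, keyed by the header line (the column names here are made up):

```python
import csv
import io

# RFC 4180 CSV uses CRLF line endings; each data row becomes
# a dict keyed by the header line.
data = "name,port\r\nssh,22\r\nhttp,80\r\n"
rows = list(csv.DictReader(io.StringIO(data)))
print(rows)  # [{'name': 'ssh', 'port': '22'}, {'name': 'http', 'port': '80'}]
```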

Examples
# To read a json file with format auto detection
variable_dict_from_file_type("prefix", "var", "/tmp/file.json", "");
# To force YAML parsing on a file without a yaml extension
variable_dict_from_file_type("prefix", "var", "/tmp/file", "YAML");

If /tmp/file.json contains:

{
  "key1": "value1"
}

You will be able to access the value1 value with ${prefix.var[key1]}.


dict_from_file [windows, unix]

Define a variable that contains key,value pairs (a dictionary) from a JSON file

variable(prefix, name).dict_from_file(file_name)
  • file_name: The absolute local file name with JSON content

To use the generated variable, you must use the form ${prefix.name[key]} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.

See variable_dict_from_file_type for complete documentation.


dict [windows, unix]

Define a variable that contains key,value pairs (a dictionary)

variable(prefix, name).dict(value)
  • value: The variable content in JSON format

To use the generated variable, you must use the form ${prefix.name[key]} with each name replaced with the parameters of this method.

Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.


windows

resource windows(component)
  • component: Windows component name

States

hotfix_present [windows]

Ensure that a specific windows hotfix is present on the system.

windows(component).hotfix_present(package_path)
  • package_path: Windows hotfix package absolute path, can be a .msu archive or a .cab file

Ensure that a specific windows hotfix is present on the system.


hotfix_absent [windows]

Ensure that a specific windows hotfix is absent from the system.

windows(component).hotfix_absent()

Ensure that a specific windows hotfix is absent from the system.


component_present [windows]

Ensure that a specific windows component is present on the system.

windows(component).component_present()

Ensure that a specific windows component is present on the system.


component_absent [windows]

Ensure that a specific windows component is absent from the system.

windows(component).component_absent()

Ensure that a specific windows component is absent from the system.