rudder-lang
Preface
Language Presentation
This language is not:

- a general purpose language
- a Turing-complete language
- an imperative language

It has no:

- recursion
- generators / generic iterators
- way of looping except over finite lists
This language is an Open Source DSL (domain-specific language) targeted at state definition. Everything that is not a state definition is a convenience for easier definition of a state. The compiler is very pedantic to avoid defining invalid states as much as possible.
The file extension is rl, for Rudder Language.
Example:
@format=0
@name="Configure NTP"
@description="test"
@version = 0
@parameters=[]
resource Configure_NTP()
Configure_NTP state technique() {
@component = "Package present"
package("ntp").present("","","") as package_present_ntp
}
Once compiled to CFEngine code:
# generated by rudder-lang
# @name Configure NTP
# @description test
# @version 1.0
bundle agent Configure_NTP_technique
{
vars:
"resources_dir" string => "${this.promise_dirname}/resources";
methods:
"Package present_${report_data.directive_id}_0" usebundle => _method_reporting_context("Package present", "ntp");
"Package present_${report_data.directive_id}_0" usebundle => package_present("ntp", "", "", "");
}
Short-term future abilities
- Generate techniques written in rudder-lang into DSC
- Generate techniques written in rudder-lang into JSON
- Error feedback directly in the Technique Editor
- Enhanced (or refactored):
  - variable handling (for example in conditions)
  - CFEngine generation
Long-term future abilities
- New keywords, including the action, measure and function keywords
- Fully rewrite the ncf library into a self-sufficient rudder-lang library
- Full integration and usage of rudder-lang in Rudder, whether as code or in the Technique Editor
- Various improvements and some reworks
Concepts
Resource
- a resource is an object that sits on the system being configured
- a resource is defined by a resource type with 0 or more parameters
- a resource type with 0 parameters defines a unique resource in the system
- a resource can contain other resources
State
- a state is an elementary configuration of a resource
- a state is defined by a name and 0 or more parameters
- a given resource can have many states at the same time
- a given resource can only have one state of a given name
- state application produces a status, also named result
Variables and types
- configurations can be parameterized via constants and variables
- constants and variables have a type
- types are distinct from resources and states
- types are all based on basic types: integer, float, string, boolean, array, hashmap
- a variable cannot contain a resource or a state
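As a sketch (the variable names here are hypothetical, not taken from the stdlib), definitions of each basic type could look like:

```
let max_retries = 3                         # integer
let threshold = 0.5                         # float
let greeting = "hello ${user}"              # string, with interpolation
let enabled = true                          # boolean
let ports = [80, 443]                       # array / list
let limits = { "cpu": "2", "ram": "4G" }    # hashmap / struct
```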
Enums and conditions
- an enum is an exhaustive list of possible values
- an enum mapping maps all possible values of an enum to another enum
- a condition is an enum expression
- an enum expression is a boolean expression of enum comparisons
- an enum comparison compares a variable with an enum value, with mapping knowledge
- state application outcomes are enums too
- a case is a list of conditions that must match all possible cases exactly once
- an if is a single condition
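To illustrate the case concept, here is a hedged sketch reusing the global system enum described later in this document (the log messages are invented):

```
case {
  linux   => log_info "supported system",
  windows => log_info "supported system too",
  default => fail "unsupported system"
}
```

Every possible value of the enum is covered exactly once, either explicitly or through default.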
Lexical structure
File structure:
- Starts with a header metadata: @format=X, X being the file version
- After the header come declarations and definitions of items (see Items definition and declaration patterns)
Keywords
The following keywords currently have the functionality described below, sorted by category:
- header:
  - @format .., defines the rudder-lang version of the file. Indicates when compiling whether a version conversion must be performed beforehand
- enum:
  - enum .., a list of values
  - global .., usage: paired with enum. Means enum values are unique and can be guessed without specifying a type
  - items (in ..), sub-enums, i.e. extends an existing enum
  - alias, gives another name to an enum item
- types:
  - string type definition, cf. String type
  - float type definition, cf. Float type
  - integer type definition, cf. Integer type
  - boolean type definition, cf. Boolean type
  - struct type definition, cf. Struct type
  - list type definition, cf. List type
- let .., global variable declaration
- resource .., to declare a new resource
- .. state .., to define a new state linked to an existing resource
- flow operators:
  - if ..
  - case .., a list of (condition, statement) pairs
  - default, calls the default behavior of an enum expression. Mandatory when an enum ends with a *
  - nodefault, can be met in a single-cased case switch
- flow statements:
  - fail .., to stop the engine with a final message
  - log_debug .., to print debug data
  - log_info .., to inform the user
  - log_warn .., to warn the user
  - return .., to return a specific result
  - noop, do nothing
Operators
- @ declares a metadata, which is a key / value pair (syntax is @key=value). Cf. Metadata
- # simple comment
- ## parsed comment. ## comments are considered to be metadata: they are parsed and kept
- | or, & and, ! not
- . item in an enum
- .. item range, in an enum
- =~ is included or equal, !~ is not included or not equal. Used when comparing enum expressions
- ! audit state application
- ? condition state application
Identifiers
Identifiers are names given by users, containing only alphanumeric characters and underscores.
Identifiers can be:

- all kinds of aliases
- parameters
- enum names or enum item names
- sub-enum names or sub-enum item names
- metadata names
- resource names and resource reference names
- state names
- variable names
- agent variable names and values
Identifiers can be invalid. They cannot be:

- an invalid namespace (declared variables), including CFEngine core variables (see file libs/cfengine_core.rl)
- the name of a type:
  - "string"
  - "num"
  - "boolean"
  - "struct"
  - "list"
- a reserved keyword in the language (see keywords)
- a reserved keyword for future usage:
  - "format"
  - "comment"
  - "dict"
  - "json"
  - "enforce"
  - "condition"
  - "audit"
  - "let"
An invalid variable name is:

- an invalid identifier
- an enum name
- a global enum item name
- a resource name
- "true" / "false"
Comments
There are two kinds of comments:

- simple comments #, which are not parsed and not stored. They are comments in the common sense: only useful for the developer inside the .rl file
- parsed comments ##, which are considered to be metadata. They are parsed and stored as such, and will later be used by the compiler
Metadata
Metadata allow extending the language and the generation process, and give the user the ability to store structured data alongside resources. Hence, a metadata value can be anything available in the language.
Types
string
rudder-lang supports multiline strings, interpolation and escape sequences:

- an escaped string is delimited by "
- an unescaped string is delimited by """
- interpolation has the following syntax: ${…}
- supported escape sequences: \\, \n, \r, \t
integer
Internally represented by a 64-bit signed integer.
float
Internally represented by a double-precision 64-bit floating point number.
boolean
true or false. Internally represented by the boolean exhaustive enum.
struct
Structs are delimited by curly braces {…}, are composed of key: value pairs, and use commas (,) as separators.
list
Lists are delimited by square brackets […] and use commas (,) as separators.
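A hedged sketch of struct and list literals (the variable names and values are invented for illustration):

```
let allowed_ports = [22, 80, 443]

let ntp_settings = {
  "server": "pool.ntp.org",
  "fallback": ["time1.example.com", "time2.example.com"]
}
```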
Items
An item is a component of rudder-lang.
As explained in a previous chapter, after the header come declarations and definitions of rudder-lang items.
Note: item declaration means informing the compiler that something now exists, with a given name and optionally a type. Item definition means giving the said variable a value.
Before defining variables, an overview of the language keywords as well as a better understanding of types, operators and enums is required.
Local vs global scope
Some items are considered to be declared or defined globally.
Two items cannot have the same name in a given scope. This implies that no local variable can be defined with the same name as a global item, since the latter is by definition available from every single scope.
Declaration and definition patterns
Most item definition patterns look like:
# comment # optional
@metadata="value" # optional
type identifier(parameter_list) # lists are wrapped into `[` or `{` or `(`
Unless specified otherwise, comments and metadata are allowed.
List of possible definitions:

- enum:
  - enum definition
  - sub-enum definition
  - enum alias (a declaration, not a definition)
- resource definition
- state definition
- variable (global) definition
- alias definition
- agent variable (global) (a declaration, not a definition)
Note: an identifier (abbr: ident) is a word composed of alphanumeric characters and underscores (_). All variable names, parameters, enum fields and aliases are parsed as identifiers.

Note: a value can be any rudder-lang type.
definition: enum
An enum is a list of values, a bit like a C enum. See enums to have an understanding on how rudder-lang enums work.
Examples:
Exhaustive enum:
enum boolean {
true,
false
}
Global, non-exhaustive enum:
global enum system {
windows,
linux,
aix,
bsd,
hp_ux,
solaris,
*
}
definition: sub-enum
Sub-enums extend an existing enum item, adding children to it.
Note: sub-enums derived from a global enum inherit the global property.
items in aix {
aix_5,
aix_6,
aix_7,
*
}
Items can have sub-enums of their own:
items in debian_family {
@cfengine_name="(debian.!ubuntu)"
debian,
ubuntu,
# Warning: update debian if you make change here
*
}
Note that each element can be supplemented by a comment or metadata.
declaration: enum alias
Aliases of enum items can be declared.
Enum alias:
enum alias ident = enum_name
Enum item alias:
enum alias ident = enum_name.item
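A hedged example, reusing the system enum shown elsewhere in this document (the alias names are invented):

```
# alias for a whole enum
enum alias os = system

# alias for a single enum item
enum alias tux = system.linux
```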
definition: resource
resource ident(p1, p2)

A resource can also be defined with a parent link: resource ident(p1, p2): ident
definition: state
A state extends a resource and is private to the resource context.
State definition model:
resource_name state state_name(p1, p2) {
# statements
}
Read more about statements here
Examples:
Configure_NTP state technique() {
@component = "Package present"
package("ntp").present("","","") as package_present_ntp
}
The Configure_NTP resource is extended by a new state called technique, receiving no parameters since its content (called a statement) does not require any.
Another example to illustrate parameterized states:
@metadata="value"
ntp state configuration (to_log="file is absent")
{
file("/tmp").absent() as abs_file
if abs_file =~ kept => log "info: ${to_log}"
}
In the above example there is a local state declaration and a condition leading to an action.
Note: a state declaration is always part of a statement, whereas a state definition is a top-level feature.
declaration & definition: variable
No comment or metadata allowed.
Variables are declared using the let keyword and can optionally be defined inline:
let ident = "value" or let my_var = other_var, or any type rudder-lang handles.
Declaration of namespaces is possible:
let namespace1.namespace2.ident
definition: alias
Aliases allow renaming both elements of a resource and state pair.
Example:
alias resource_alias().state_alias() = resource().state()
Enums
Enums are not rudder-lang types strictly speaking, yet they are a full feature and have a defined syntax.
enum vs global enum

An enum can be global. There are some key differences:

| difference                                  | enum | global enum |
|---------------------------------------------|------|-------------|
| globally unique items                       | no   | yes         |
| enum type must be specified at call time    | yes  | no          |
| each item has an associated global variable | no   | yes         |
All item names of a global enum are globally unique and usable as is, meaning it becomes reserved, no other variable can be created with this name.
In other words, item names of global enums are directly available in the global namespace.
# arbitrary system list
global enum system {
windows,
linux
}
To call an item of this enum, just type linux (rather than system.linux) as it exists in the global namespace.
Still, it remains different from a plain variable since internally a reference to the enum tree is kept.
Access to enum content
It is possible to access an enum item or range of items.
Note: enum ranges are not sorted, therefore range order is the same as enum definition order.
Depending on the enum being global or not, it is possible to directly call items, since global enums declare a variable for each of their items:

- item: enum.item, or item if the enum is global
- range, expressed one of the following ways:
  - enum_item..
  - enum_item..enum_item2
  - ..enum_item
Example:
# arbitrary system list that is not global
enum system {
windows,
linux,
aix,
bsd
}
if linux =~ system.linux                 # is true
if linux =~ ..system.windows             # is false
if windows !~ system.linux..system.bsd   # is true
if aix =~ system.linux..                 # is true
Statements and Expressions
Statements
Statements are an important concept that can be expressed by either:

- comments and metadata
- a state declaration: resource().state() as mystatedecl, to store a state into a local variable that can be called later
- a variable definition: let myvar = "value". The value can be of any primitive type
- a (switch) case. Cf. case conditions
- an if condition that contains an enum expression: if expr => statement. Cf. if conditions
- a flow statement: return, log_debug, log_info, log_warn, fail, noop
Example of a state definition that exposes every statement type:
@format=0
resource deb()
deb state technique()
{
# list of possible statements
@info="i am a metadata"
let rights = "g+x"
permissions("/tmp").dirs("root", "x$${root}i${user}2","g+w") as outvar
if outvar=~kept => return kept
case {
outvar=~repaired => log_info "repaired",
outvar=~error => fail "failed agent",
default => log_info "default case"
}
}
if conditions

Enum range or item access is explained in access to enum content.

Syntax: if expression => statement
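A minimal hedged sketch (the message is invented; ubuntu is assumed to come from the stdlib OS enums):

```
if ubuntu => log_info "applying Ubuntu-specific configuration"
```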
case conditions

case conditions work the same way switch cases do in other languages.
Syntax:
case {
case_expression => statement,
default => statement ## optional unless enum is global
}
case expressions are mostly standard expressions, so they handle &, |, !, (..) and default the same way.
The only difference is that cases have an additional nodefault expression that silently comes with a noop statement.
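A hedged sketch of nodefault in a single-cased case switch (the variable outvar is borrowed from the earlier state definition example):

```
case {
  outvar =~ error => fail "failed agent",
  nodefault
}
```

Here only the error case is handled explicitly; every other outcome silently maps to noop.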
Expressions
Expressions are boolean expressions based on enum comparisons.
Their purpose is to check whether a variable is of the right type and contains the provided item as a value, or an ancestor item if there is any.
NOTE: default is a value that is equivalent to true.
Expressions are a composition of the following elements:

- or: expr | expr
- and: expr & expr
- not: !expr
- parentheses: (expr) to handle priorities between expressions
- default: the default keyword, which automatically comes with a noop statement
- enum comparison:
  - var =~ enum, variable is equivalent to enum range or item
  - var !~ enum, variable is not equivalent to enum range or item
  - implicit boolean comparison that only takes an expression (for example !linux & !windows)
Note: see enum-related syntax here, including item and range and expression examples.
Appendices
Libraries
stdlib
What is called stdlib (rudder-lang's own standard library) is the following set of files, located in the ./libs folder:
resourcelib.rl
Contains the list of available methods. A method is composed of:

- a resource
- its relative states
- the resource parameters
- the states' own parameters
corelib.rl
Contains standard enums available to rudder-lang users, like the general purpose enums boolean and result.
oslib.rl
Contains the exhaustive list of supported operating systems, including major and minor versions.
Stored in the form of nested `enum`s.
More about supported OSes and their usage in rudder-lang here: supported OSes
cfengine_core.rl
Contains the exhaustive list of CFEngine reserved words and namespaces.
Required since these words cannot be created by rudder-lang to avoid any conflicts with CFEngine.
Operating systems
Since rudder-lang is a language designed to configure servers, operating systems are an important part of it.
A part of the stdlib is dedicated to declaring a structured list of handled operating systems in the form of enums.
This chapter explains how to use it.
Syntax
OS list construction:

- The underscore _ is used as a separator
- 4 accuracy layers, all following this syntax rule:
  - system → linux, etc.
  - os → ubuntu, etc.
  - os_major → ubuntu_20, etc.
  - os_major_minor → ubuntu_20_04, etc.
Language syntax
Since the rudder-lang OS list is composed of enums, it meets the requirements that are specific to enums:

- a top layer, that is the global enum system
- sub-enums that expand their parent item: items in linux or items in ubuntu_20
- aliases can be used to define any sub-enum, like: enum alias focal = ubuntu_20_04
More about enums
Usage
rudder-lang makes use of an exhaustive list of operating systems, including major and minor versions.
This list is defined in the stdlib (more about it here)
For now they are used in conditions to check whether a method should be applied or not.
Several degrees of accuracy can be chosen when defining a condition:

- system (kernel): windows, linux, bsd, etc.
- operating system: ubuntu, windows_server, etc.
- major version: for example, ubuntu_20
- minor version: for example, ubuntu_20_04

Yet any sub-enum is standalone, meaning it is directly used on its own: ubuntu_20.
Note: the fact that ubuntu_20 is part of ubuntu → linux → system only matters for accuracy's sake: linux evaluates to true on ubuntu_20.
Example with ubuntu_20_10 as the targeted OS.

The following expressions will evaluate to true:

- if linux
- if ubuntu
- if ubuntu_20

The following expressions will evaluate to false:

- if windows
- if ubuntu_20_04
Rudder-lang usage
There are two ways to interact with rudder-lang: directly from the terminal or through the Technique Editor.
Using the command line interface (CLI)
Installation
The rudder-lang program is called rudderc, standing for Rudder Compiler.
To start working with rudder-lang, install a beta agent (see rudder agent installation (debian); guides for other OSes are available).
rudderc being a part of the agent, it is installed at the following location: /opt/rudder/bin/rudderc
Optionally add the rudderc directory to your path (export PATH=$PATH:/opt/rudder/bin) to simply run it with the following command: rudderc
Usage
rudderc has 4 features, called commands, that generate code. The command you need depends on the format you have and the output format you want:

- compile: generates either a DSC or CFEngine technique from a RudderLang technique
- save: generates a RudderLang technique from a JSON technique (the same object format the Technique Editor produces)
- technique read: generates a JSON technique (the same object format the Technique Editor produces) from a RudderLang technique
- technique generate: generates a JSON object that comes with RudderLang + DSC + CFEngine techniques from a JSON technique

It is worth noting that the --stdin and --stdout options are the default behavior for technique generate and technique read. JSON output (which includes logs) is handled: it is optional for compile and save, but is the default behavior for technique read and technique generate.

By default all logs but error are printed to STDOUT; error logs are printed to STDERR.
rudderc --help or rudderc -h (output slightly modified):

Rudderc (4) available commands are callable through subcommands, namely <technique read>, <technique generate>, <save>, <compile>,
allowing it to perform generation or translation from / into the following formats: [JSON, RudderLang, CFengine, DSC].

Run `rudderc <SUBCOMMAND> --help` to access its inner options and flags helper

Example: rudderc technique generate -c confs/my.conf -i techniques/technique.json -f rudderlang

USAGE:
    rudderc <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    compile      Generates either a DSC / CFEngine technique (`--format` option) from a RudderLang technique
    help         Prints this message or the help of the given subcommand(s)
    save         Generates a RudderLang technique from a CFEngine technique
    technique    A technique can either be used with one of the two following subcommands: `read` (from rudderlang to json) or `generate` (from json to cfengine or dsc or rudderlang)
rudderc commands all share several flags and options.

Shared FLAGS:
    -b, --backtrace    Generates a backtrace in case an error occurs
    -h, --help         Prints help information
        --stdin        Takes stdin as an input rather than using a file. Overwrites input file option
        --stdout       Takes stdout as an output rather than using a file. Overwrites output file option. Dismisses logs directed to stdout. Errors are kept since they are printed to stderr
    -V, --version      Prints version information

Shared OPTIONS:
    -c, --config-file <config-file>    Path of the configuration file to use. A configuration file is required (containing at least stdlib and generic_methods paths) [default: /opt/rudder/etc/rudderc.conf]
    -i, --input <file>                 Input file path. If option path does not exist, concat config input with option
    -l, --log-level <log-level>        rudderc output logs verbosity [default: warn] [possible values: off, trace, debug, info, warn, error]
    -o, --output <file>                Output file path. If option path does not exist, concat config output with option. Else base output on input
But some commands come with their own flags and options (listed below) on top of the previously mentioned:

rudderc compile:

    Generates either a DSC / CFEngine technique from a RudderLang technique

    USAGE:
        rudderc compile [FLAGS] [OPTIONS]

    FLAGS:
        -j, --json-logs    Use json logs instead of human readable output.
                           This option will print a single JSON object that will contain logs, errors and generated data (or the file where it has been generated).
                           Whichever command is chosen, JSON output format is always the same. However, some fields (data and destination file) could be set to `null`, make sure to handle `null`s properly.
                           Note that NO_COLOR specs apply by default for json output. Also note that setting NO_COLOR manually in your env will also work

    OPTIONS:
        -f, --format <format>    Enforce a compiler output format (overrides configuration format) [possible values: cf, cfengine, dsc, json]
    ...

rudderc save:

    Generates a RudderLang technique from a CFEngine technique

    USAGE:
        rudderc save [FLAGS] [OPTIONS]

    FLAGS:
        -j, --json-logs    Use json logs instead of human readable output (same behavior as described for compile)
rudderc technique read (read is a subcommand of the technique subcommand):

    Generates a JSON technique from a RudderLang technique

    USAGE:
        rudderc technique read [FLAGS] [OPTIONS]
    ...

rudderc technique generate (generate is a subcommand of the technique subcommand):

    Generates a JSON object that comes with RudderLang + DSC + CFEngine technique from a JSON technique

    USAGE:
        rudderc technique generate [FLAGS] [OPTIONS]
    ...
Most options are pretty straightforward, but some explanations might help:

- Flags and options must be written in kebab-case
- A configuration file is required because rudderc needs its own libraries to work (the default path should point to an already working Rudder configuration if the rudder agent was installed as previously suggested)
- Configuration can define flags and options, but the CLI always overwrites config-defined ones, i.e. CLI --output > config output
- --stdin > --input
- --stdout > --output > input as destination with updated extension
- --format > --output technique extension
- --log-level values are ordered (trace > debug > info > warn > error), which means info includes warn and error
- --stdin is designed to work with pipes (ex: cat file.rl | rudderc compile -c file.conf -f cf); it won't wait for an input. Higher priority than the --input option
- --stdout will dismiss any kind of logs, including errors. The only thing printed to the terminal is the expected result; if it is empty, try again with a log, there is an error. Higher priority than the --output option
Options: how input, output and format are dealt with:

Internally, for input, the compiler looks for an existing file until it finds one, in the following order:

- solely from the CLI input option
- configuration input as a directory, joined with the CLI input option
- solely from the configuration input (if the file exists)
- if none worked, error

Internally, for output, the compiler looks for an existing path to write a file to, until it finds one:

- solely from the CLI output option
- configuration output as a directory, joined with the CLI output option
- solely from the configuration output
- input path with an updated extension
- if none worked, error

Internally, for format, when required (compile):

- for any command but compile, the format is set by the program
- compile command: explicit CLI --format option. Note that values are limited
- compile command: the output file extension is used
- if none worked, error
Configuration file
A configuration file is required because rudderc needs its own libraries to work.
The entire rudder-lang environment is already set up alongside the agent: this includes all needed libraries and a configuration file with preset paths.
[shared]
stdlib="libs/"
cfengine_methods="repos/ncf/tree/30_generic_methods/"
alt_cfengine_methods="repos/dsc/plugin/ncf/30_generic_methods/"
dsc_methods="repos/dsc/packaging/Files/share/initial-policy/ncf/30_generic_methods/"
[compile]
input="tests/techniques/simplest/technique.rl"
output="tests/techniques/simplest/technique.rl.cf"
[save]
input="tests/techniques/simplest/technique.cf"
output="tests/techniques/simplest/technique.cf.rl"
[technique_read]
input="tests/techniques/simplest/technique.rl"
output="tests/techniques/simplest/technique.rl.json"
[technique_generate]
input="tests/techniques/simplest/technique.json"
output="tests/techniques/simplest/technique_array.json"
[testing_loop]
cfengine="/opt/rudder/bin/cf-promises"
ncf_tools="repos/ncf/tools/"
py_modules="tools/"
The configuration file can be used to shorten arguments.
There is a table for each command (compile, technique_read, technique_generate, save), each of which can hold its own two limited fields: input and output.
The meaningful usage is that these two fields are paths that are completed by the filenames given via the --input <file> / --output <file> CLI options.
In other words: config options are paths (directories), to which the CLI option is joined.
But configuring full paths in the file and not using the CLI options will also work.
Compilation example
-
Required: a config file to work on a local environment:
[shared]
stdlib="libs/"
cfengine_methods="repos/ncf/tree/30_generic_methods/"
alt_cfengine_methods="repos/dsc/plugin/ncf/30_generic_methods/"
dsc_methods="repos/dsc/packaging/Files/share/initial-policy/ncf/30_generic_methods/"
- CLI full version:

rudderc compile --json-logs --log-level debug --config-file tools/my.conf --input tests/techniques/technique.rl --output tests/techniques/technique.rl.dsc --format dsc

- CLI shortened version:

rudderc compile -j -l debug -c tools/my.conf -i tests/techniques/technique.rl -f dsc
What it means:

- Compiles tests/techniques/technique.rl (-i) into tests/techniques/technique.rl.dsc (output based on input),
- Uses the configuration file located at ./tools/my.conf (-c),
- Output technique format is DSC (--format). Note that this parameter is optional when the output file extension already defines the right technique format,
- Output log format is JSON (-j),
- The following log levels: error, warn, info, debug will be printed to the terminal
- CLI + config shortened version

By using an adapted configuration file, it can be simplified:

[shared]
stdlib="libs/" # only required field for rudderc
[compile]
input="tests/techniques/"
output="tests/techniques/"

Lightest compilation using the CLI:
rudderc compile -j -l debug -c tools/myconf -i technique.rl

Input will be a concatenation of config and CLI: tests/techniques/technique.rl. Output is still based on input.
- config + CLI shortest version

By using an adapted configuration file, it can be simplified:

[shared]
stdlib="libs/" # only required field for rudderc
[compile]
input="rl/technique.rl"
output="dsc/technique.rl.dsc"

Lightest compilation using the CLI:
rudderc compile -j -l debug -c tools/myconf
JSON Output
If you decided to go with the --json-logs option, the output will consist of a single JSON object:
{
"command": "compile",
"time": "1600331631367",
"status": "success",
"source": "tests/techniques/simplest/technique.rl",
"logs": [],
"data": [
{
"format": "DSC",
"destination": "tests/techniques/6.1.rc5/technique.dsc",
"content": null
}
],
"errors": []
}
- Output always uses the same skeleton, which is the one you just read.
- data field:
  - Length always 0 in case of error # TODO check for technique generate
  - Length always 3 when technique generate is called
  - Length always 1 in any other case, since other commands only generate 1 format
- content field is null if its content has successfully been written to a file
- destination field is null if content is directly written in the JSON
- errors field is an array of strings # TODO log field
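As an illustration of how a caller might consume this output, here is a hedged Python sketch (the sample object is a trimmed copy of the skeleton above with invented values, not real compiler output):

```python
import json

# Sample object following the JSON skeleton above (hypothetical values)
raw = """
{
  "command": "compile",
  "time": "1600331631367",
  "status": "success",
  "source": "technique.rl",
  "logs": [],
  "data": [
    {"format": "DSC", "destination": "technique.dsc", "content": null}
  ],
  "errors": []
}
"""

report = json.loads(raw)
if report["errors"]:
    raise SystemExit("compilation failed: " + "; ".join(report["errors"]))
for entry in report["data"]:
    # content is null when the result was written to destination, and vice versa
    target = entry["destination"] or "<inline content>"
    print(f"{entry['format']} -> {target}")  # prints: DSC -> technique.dsc
```

Since destination and content can each be null, handling both cases as above avoids surprises when switching between file output and --stdout.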
Using the Technique Editor
Since rudder-lang has not been released yet, it is accessible from the 6.1 beta version (and later).
rudder-lang is called from the Technique Editor as a backend program every time a technique is saved. For now it is only a testing loop. Once fully released, every technique will directly be saved using rudder-lang.

Note: this testing loop generates two CFEngine techniques, one using the usual ncf framework and another one using rudder-lang. The two are then compared.

Since the Technique Editor is meant to simplify method generation, no rudder-lang code is written there (the language is fully abstracted). It is used as an internal CFEngine generator.
Integration to Rudder
Right now rudder-lang is in an alpha state. It is deployed but not properly released: rudderc is already part of Rudder, but is not visible to the user.
What it means is neither available nor user techniques are generated using rudder-lang yet.
Testing loop
rudder-lang is in fact called, but only for its own testing purposes.
Every time a technique is saved from the Technique Editor (which outputs a CFEngine technique), a script does the following using the generated technique:

- generates, in a temporary folder, cf, json and rl files by calling libraries and rudderc
- compares the original files with the rudderc-generated files
- logs differences and potential compilation / scripting errors to /var/log/rudder/rudder-lang/${technique_id}/*
How to: disable automated usage of rudder-lang testing loop
Open the following file: /opt/rudder/etc/rudder-web.properties and set rudder.lang.test-loop.exec=true to false.
rudderc won't be called anymore until reactivation.
How to: manual testing
It is possible to directly take a look at the techniques generated via rudder-lang, by calling the testing loop on your own from the Rudder server command line.
The testing script location is /opt/rudder/share/rudder-lang/tools/tester.sh. The script takes the following parameters:

- An optional --keep parameter that forces techniques generated via rudder-lang to be saved in a folder located in /tmp (a new folder is generated for every loop). The folder is echoed by the script so you can look at its content.
- The mandatory technique name (actually the Technique ID, which can be found in the Technique Editor)
- The mandatory technique category, in the form of a path. For example, the default category is ncf_techniques.

With those 2 mandatory parameters, the script can make up the correct path to the directory holding the technique:
/var/rudder/configuration-repository/techniques/${technique_category}/${technique_id}/1.0/
Example:
To test the new_tech
technique, located at: /var/rudder/configuration-repository/techniques/systemSettings/networking/new_tech/1.0/technique(.cf)
$ /opt/rudder/share/rudder-lang/tools/tester.sh --keep new_tech systemSettings/networking
Done testing in /tmp/tmp.LFA6XjxkuF
$ ls -1 /tmp/tmp.LFA6XjxkuF
new_tech.json # generated by ncf
new_tech.rl # generated by rudderc --translate
new_tech.rl.cf # generated by rudderc --compile
new_tech.rl.cf.json # generated by ncf (for now)
$ ls -1 /var/log/rudder/rudder-lang/new_tech/ # generated logs for the technique testing loop (explicit names)
compare_json.log ### these logs hold any error or difference that is unexpected
compare_cf.log ### if no error / difference is found, the log file will not be generated
rudderc_compile.log
rudderc_translate.log
Standard library
By default, resource and state parameters:
-
cannot be empty
-
cannot contain only white-spaces
-
have a max size of 16384 chars
Exceptions are explicitly specified in the doc.
States marked as actions represent actions that will be executed at every run. You should generally add a condition when using them.
command
resource command(command)
command
Command to run
States
execution_result [unix]
Execute a command and create result conditions depending on its exit code
command(command).execution_result(kept_codes, repaired_codes)
Execute a command and create result conditions depending on the exit codes given in parameters. If an exit code is not in either list, it will lead to an error status. If you want 0 to be a success, you have to list it in the kept_codes list.
kept_codes
List of codes that produce a kept status separated with commas (ex: 1,2,5)
repaired_codes
List of codes that produce a repaired status separated with commas (ex: 3,4,6)
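As a sketch, this state could be called from a rudder-lang technique like the one shown in the preface; the command path, exit codes and alias name below are illustrative, not part of the standard library:

```rudder-lang
@component = "Command execution result"
command("/usr/bin/ntpstat").execution_result("0", "1,2") as ntpstat_result
```

Here exit code 0 would produce a kept status, 1 or 2 a repaired status, and any other code an error.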
execution_once [unix]
Execute a command only once on a node
command(command).execution_once(ok_codes, until, unique_id)
This method is useful for specific commands that should only be executed once per node.
If you can spot a condition for the command execution by testing the
state of its target, it is better to use the condition_from_command
method to test the state coupled with the command_execution_result
method to run the command if necessary.
The method will:
Define the command_execution_once_${command}_kept
condition and do
nothing if a command_execution_once
has already been executed on this
machine with the same Unique id.
Execute the command if it is the first occurrence and:
-
If the parameter Until is
any
, it will consider the command as executed on the machine and define either:-
command_execution_once_${command}_repaired
if the return code is in ok_codes, -
command_execution_once_${command}_error
otherwise.
-
-
If the parameter Until is ok and:
-
If the return code is in the Ok codes list, define the
command_execution_once_${command}_repaired
condition -
If the return code is not in Ok codes it defines the
command_execution_once_${command}_error
condition and retries at the next agent run.
-
If an exit code is not in the list, it will lead to an error status. If you want "0" to be a success, you have to list it in the Ok codes list.
Example:
If you use:
command_execution_once("command -a -t", "0", "ok", "my_program_setup")
It will retry to run command -a -t
until it returns "0". Then it will
not execute it again.
ok_codes (can be empty)
List of codes that produce a repaired status separated with commas (ex: 1,2,5). Defaults to 0.
unique_id (can be empty)
Identifier used to keep track of the action even if the command changes. Defaults to the command itself if left empty.
until (empty, any
, ok
)
Try to execute the command until a particular state: 'ok', 'any' (defaults to 'any')
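A hypothetical rudder-lang call for this state (script path and unique id are illustrative), following the parameter order of the signature above:

```rudder-lang
@component = "Command execution once"
command("/usr/local/bin/initial-setup.sh").execution_once("0", "ok", "initial_setup") as setup_once
```

With until set to ok, the script is retried on every run until it exits 0, and is then never executed again.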
execution [windows, unix]
Execute a command
command(command).execution()
Execute the Command in a shell. On the DSC agent, the Command is executed
through the Powershell &
operator.
The method status will report:
-
a Repaired if the return code is "0",
-
an Error if the return code is not "0"
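A minimal, hypothetical rudder-lang usage (the command is illustrative); since this is an action state executed at every run, it should normally be guarded by a condition as noted at the top of the standard library section:

```rudder-lang
@component = "Command execution"
command("/bin/systemctl reload ntpd").execution() as reload_ntpd
```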
condition
resource condition(condition_prefix)
condition_prefix
The condition prefix
States
once [unix]
Create a new condition only once
condition(condition_prefix).once()
This method defines a condition named from the Condition parameter the first time it is called. Subsequent agent runs will not define the condition.
This allows executing actions only once on a given machine. The created condition is global to the agent.
Example:
If you use:
condition_once("my_condition")
The first agent run will have the condition my_condition
defined,
contrary to subsequent runs for which no condition will be defined.
See also : command_execution_once
from_variable_match [windows, unix]
Test the content of a string variable
condition(condition_prefix).from_variable_match(variable_name, expected_match)
Test a variable content and create conditions depending on its value:
-
If the variable is found and its content matches the given regex:
-
a
${condition_prefix}_true
condition, -
and kept outcome status
-
-
If the variable is found but its content does not match the given regex:
-
a
${condition_prefix}_false
condition, -
and a kept outcome status
-
-
If the variable can not be found:
-
a
${condition_prefix}_false
condition -
and an error outcome status
-
/!\ Regexes for Unix machines must be PCRE-compatible and those for the Windows agent must respect the .NET regex format.
-
If you want to test a technique parameter, use the
technique_id
of the technique as variable prefix and the parameter_name as variable name.
expected_match
Regex to use to test if the variable content is compliant
variable_name
Complete name of the variable being tested, like my_prefix.my_variable
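A hypothetical rudder-lang usage (variable and prefix names are illustrative), testing whether a previously defined variable contains only digits:

```rudder-lang
@component = "Condition from variable match"
condition("ntp_port_valid").from_variable_match("configuration.ntp_port", "^[0-9]+$") as ntp_port_check
```

This defines ntp_port_valid_true or ntp_port_valid_false depending on the match.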
from_variable_existence [windows, unix]
Create a condition from the existence of a variable
condition(condition_prefix).from_variable_existence(variable_name)
This method defines a condition:
-
{condition_prefix}_{variable_name}_true
if the variable named from the parameter Variable name is defined -
{condition_prefix}_{variable_name}_false
if the variable named from the parameter Variable name is not defined
Also, this method always results in a success outcome status.
variable_name
Complete name of the variable being tested, like my_prefix.my_variable
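A hypothetical rudder-lang usage (names are illustrative):

```rudder-lang
@component = "Condition from variable existence"
condition("setup").from_variable_existence("configuration.ntp_server") as ntp_server_defined
```

This defines a _true or _false condition depending on whether configuration.ntp_server is defined, and always reports success.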
from_expression_persistent [unix]
Create a new condition that persists across runs
condition(condition_prefix).from_expression_persistent(condition_expression, duration)
This method evaluates an expression (=condition combination), and
produces a ${condition_prefix}_true
or a ${condition_prefix}_false
condition depending on the result of the expression, which will last
for the Duration time:
-
This method always results in a success outcome status
-
If the expression evaluation results in a "defined" state, this will define a
${condition_prefix}_true
condition, -
If the expression evaluation results in an "undefined" state, this will produce a
${condition_prefix}_false
condition.
Calling this method with a condition expression transforms a complex expression into a single class condition.
The created condition is global to the agent and is persisted across runs. The persistence duration is controlled using the parameter Duration which defines for how long the target condition will be defined (in minutes). Note that there is no way to persist indefinitely.
Example:
If you want to check if a condition evaluates to true, like checking that you are on Monday, 2am, on RedHat systems, and make it last one hour you can use the following policy
condition_from_expression_persistent_("backup_time", "Monday.redhat.Hr02", "60")
The method will define:
-
In any case:
-
condition_from_expression_persistent_backup_time_kept
-
condition_from_expression_persistent_backup_time_reached
-
-
And:
-
backup_time_true
if the system is a RedHat like system, on Monday, at 2am, and will persist for Duration minutes, -
backup_time_false
if the system is not a RedHat-like system, or it’s not Monday, or it’s not 2am -
no extra condition if the expression is invalid (cannot be parsed)
-
Notes:
Rudder will automatically "canonify" the given Condition prefix at
execution time, which means that all non [a-zA-Z0-9_]
characters will
be replaced by an underscore.
condition_expression
The expression evaluated to create the condition (use 'any' to always evaluate to true)
duration
The persistence suffix in minutes
from_expression [unix]
Create a new condition
condition(condition_prefix).from_expression(condition_expression)
This method evaluates an expression, and produces a
${condition_prefix}_true
or a ${condition_prefix}_false
condition
depending on the result of the expression evaluation:
-
This method always results in a success outcome status
-
If the evaluation results in a "defined" state, this will define a
${condition_prefix}_true
condition, -
If the evaluation results in an "undefined" state, this will produce a
${condition_prefix}_false
condition.
Calling this method with a condition expression transforms a complex expression into a single condition.
The created condition is global to the agent.
Example
If you want to check if a condition evaluates to true, like checking that you are on Monday, 2am, on RedHat systems, you can use the following policy
condition_from_expression("backup_time", "Monday.redhat.Hr02")
The method will define:
-
In any case:
-
condition_from_expression_backup_time_kept
-
condition_from_expression_backup_time_reached
-
-
And:
-
backup_time_true
if the system is a RedHat like system, on Monday, at 2am. -
backup_time_false
if the system is not a RedHat-like system, or it’s not Monday, or it’s not 2am -
no extra condition if the expression is invalid (cannot be parsed)
-
Notes:
Rudder will automatically "canonify" the given Condition prefix at
execution time, which means that all non [a-zA-Z0-9_]
characters will
be replaced by an underscore.
condition_expression
The expression evaluated to create the condition (use 'any' to always evaluate to true)
from_command [windows, unix]
Execute a command and create result conditions depending on its exit code
condition(condition_prefix).from_command(command, true_codes, false_codes)
This method executes a command, and defines a ${condition_prefix}_true
or a ${condition_prefix}_false
condition depending on the result of
the command:
-
If the exit code is in the "True codes" list, this will produce a kept outcome and a
${condition_prefix}_true
condition, -
If the exit code is in the "False codes" list, this will produce a kept outcome and a
${condition_prefix}_false
condition, -
If the exit code is not in "True codes" nor in "False codes", or if the command cannot be found, it will produce an error outcome and no condition from
${condition_prefix}
The created condition is global to the agent.
Example:
If you run a command /bin/check_network_status
that outputs codes 0, 1
or 2 in case of correct configuration, and 18 or 52 in case of invalid
configuration, and you want to define a condition based on its execution
result, you can use:
condition_from_command("network_correctly_defined", "/bin/check_network_status", "0,1,2", "18,52")
-
If the command exits with 0, 1 or 2, then it will define the conditions
-
network_correctly_defined_true
, -
condition_from_command_network_correctly_defined_kept
, -
condition_from_command_network_correctly_defined_reached
,
-
-
If the command exits 18, 52, then it will define the conditions
-
network_correctly_defined_false
, -
condition_from_command_network_correctly_defined_kept
, -
condition_from_command_network_correctly_defined_reached
-
-
If the command exits any other code or is not found, then it will define the conditions
-
condition_from_command_network_correctly_defined_error
, -
condition_from_command_network_correctly_defined_reached
-
Notes:
-
In audit mode, this method will still execute the command passed as parameter, which means you should only pass commands with no impact on the system to this method.
-
Rudder will automatically "canonify" the given Condition prefix at execution time, which means that all non
[a-zA-Z0-9_]
characters will be replaced by an underscore.
command
The command to run
false_codes
List of codes that produce a false status separated with commas (ex: 3,4,6)
true_codes
List of codes that produce a true status separated with commas (ex: 1,2,5)
directory
resource directory(target)
target
Directory to manage
States
present [windows, unix]
Create a directory if it doesn’t exist
directory(target).present()
check_exists [unix]
Checks if a directory exists
directory(target).check_exists()
This bundle will define a condition
directory_check_exists_${directory_name}_{ok, reached, kept}
if the
directory exists, or
directory_check_exists_${directory_name}_{not_ok, reached, not_kept, failed}
if the directory doesn’t exist
absent [windows, unix]
Ensure a directory’s absence
directory(target).absent(recursive)
If recursive
is false, only an empty directory can be deleted.
recursive (can be empty)
Should deletion be recursive, "true" or "false" (defaults to "false")
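A hypothetical rudder-lang snippet (the path is illustrative) removing a cache directory and all of its content:

```rudder-lang
@component = "Directory absent"
directory("/var/cache/myapp").absent("true") as myapp_cache_absent
```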
dsc
resource dsc(tag)
tag
Name of the configuration, for information purposes
States
from_configuration [windows]
Compile and apply a given DSC configuration defined by a ps1 file
dsc(tag).from_configuration(config_file)
Compile and apply a given DSC configuration. The DSC configuration is contained in a .ps1 file, and this script is expected to end with a directive that compiles it. A configuration data file containing variables (.psd1) can also be used by the ps1 script, by referring to it in the Configuration directive.
Example 1 - without external data
EnsureWebServer.ps1
Configuration EnsureWebServer {
    Node 'localhost' {
        # Install the IIS role
        WindowsFeature IIS {
            Ensure = 'Present'
            Name = 'Web-Server'
        }
        # Install the ASP .NET 4.5 role
        WindowsFeature AspNet45 {
            Ensure = 'Present'
            Name = 'Web-Asp-Net45'
        }
    }
}
EnsureWebServer
Example 2 with external data
Data.psd1
$MyData = @{
    NonNodeData = @{
        ConfigFileContents = "Hello World! This file is managed by Rudder"
    }
}
HelloWorld.ps1
Configuration HelloWorld {
    Node 'localhost' {
        File HelloWorld {
            DestinationPath = "${RudderBase}\HelloWorld.txt"
            Ensure = "Present"
            Contents = $ConfigurationData.NonNodeData.ConfigFileContents
        }
    }
}
HelloWorld -ConfigurationData /path/to/Data.psd1
config_file (must match: ^.*\.ps1$
)
Absolute path of the .ps1 configuration file
built_in_resource [windows]
Apply a given built-in DSC resource to the node
dsc(tag).built_in_resource(scriptBlock, resourceName)
Apply a given DSC resource to the node.
Parameters
-
tag
parameter is purely informative and has no impact on the resource. -
ResourceName
must be the explicit name of the DSC resource you wish to apply -
ScriptBlock
must be a Powershell script in plain text, returning a Hashtable containing the parameters to pass to the resource.
Note that this method can only apply built-in Windows resources. It will not be able to apply an external resource.
Example
If we want to apply a Registry
resource, the resourceName
used will be Registry
, and a potential ScriptBlock could be:
$HKLM_SOFT="HKEY_LOCAL_MACHINE\SOFTWARE"
$Ensure = "Present"
$Key = $HKLM_SOFT + "\ExampleKey"
$table = @{}
$table.Add("Ensure", $Ensure)
$table.Add("Key", $Key)
$table.Add("ValueName", "RudderTest")
$table.Add("ValueData", "TestData")
$table
Note that the whole ScriptBlock will be readable in the Rudder logs and in the policy files.
resourceName
resourceName
scriptBlock
Desired state for the resource
apply [windows]
Ensure that all MOF files under MOFFile are applied via DSC.
dsc(tag).apply()
Ensure that all MOF files contained under the target folder are applied via DSC on the target node.
environment
resource environment(name)
name
Name of the environment variable
States
variable_present [unix]
Enforce an environment variable value. Caution, the new environment variable will not be usable by the agent until it is restarted
environment(name).variable_present(value)
value
Value of the environment variable
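A hypothetical rudder-lang usage (variable name and value are illustrative):

```rudder-lang
@component = "Environment variable present"
environment("http_proxy").variable_present("http://proxy.example.com:3128") as proxy_env_present
```

As noted above, the agent itself will only see the new value after it is restarted.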
file
resource file(path)
path
File path to manage
States
symlink_present_option [unix]
Create a symlink at a destination path pointing to a source target. It is also possible to enforce its creation
file(path).symlink_present_option(destination, enforce)
destination
Destination file (absolute path on the target node)
enforce
Force symlink if file already exist (true or false)
symlink_present_force [unix]
Create a symlink at a destination path and pointing to a source target even if a file or directory already exists.
file(path).symlink_present_force(destination)
destination
Destination file (absolute path on the target node)
symlink_present [unix]
Create a symlink at a destination path and pointing to a source target except if a file or directory already exists.
file(path).symlink_present(destination)
destination
Destination file (absolute path on the target node)
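A hypothetical rudder-lang snippet (paths are illustrative): per the description above, file(path) is the source target and the parameter is the symlink location, so this creates /opt/myapp/current pointing to the release directory, without overwriting an existing file:

```rudder-lang
@component = "Symlink present"
file("/opt/myapp/releases/1.2.3").symlink_present("/opt/myapp/current") as myapp_current_symlink
```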
report_content_tail [unix]
Report the tail of a file
file(path).report_content_tail(limit)
Report the tail of a file.
This method does nothing on the system; it only reports a partial content from a given file. This allows centralizing this information on the server, and avoids having to connect to each node to get this information.
Note
|
This method only works in "Full Compliance" reporting mode. |
Parameters
This is the file you want to report content from. The method will return an error if it does not exist.
The number of lines to report.
Examples
# To get the last 3 lines of /etc/hosts
file_report_content_tail("/etc/hosts", "3");
limit (can be empty, must match: ^\\d*$
)
Number of lines to report (default is 10)
report_content_head [unix]
Report the head of a file
file(path).report_content_head(limit)
Report the head of a file.
This method does nothing on the system; it only reports a partial content from a given file. This allows centralizing this information on the server, and avoids having to connect to each node to get this information.
Note
|
This method only works in "Full Compliance" reporting mode. |
Parameters
This is the file you want to report content from. The method will return an error if it does not exist.
The number of lines to report.
Examples
# To get the first 3 lines of /etc/hosts
file_report_content_head("/etc/hosts", "3");
limit (can be empty, must match: ^\\d*$
)
Number of lines to report (default is 10)
report_content [unix]
Report the content of a file
file(path).report_content(regex, context)
Report the content of a file.
This method does nothing on the system; it only reports a complete or partial content from a given file. This allows centralizing this information on the server, and avoids having to connect to each node to get this information.
Note
|
This method only works in "Full Compliance" reporting mode. |
Parameters
This is the file you want to report content from. The method will return an error if it does not exist.
If empty, the method will report the whole file content. If set, the method will grep the file for the given regular expression, and report the result.
When specifying a regex, will add the number of lines of context around matches (default is 0, i.e. no context).
When reporting the whole file, this parameter is ignored.
Examples
# To get the whole /etc/hosts content
file_report_content("/etc/hosts", "", "");
# To get lines starting with "nameserver" in /etc/resolv.conf
file_report_content("/etc/resolv.conf", "^nameserver", "");
# To get lines containing "rudder" from /etc/hosts with 3 lines of context
file_report_content("/etc/hosts", "rudder", "3");
context (can be empty, must match: ^\\d*$
)
Number of context lines when matching regex (default is 0)
regex (can be empty)
Regex to search in the file (empty for whole file)
replace_lines [unix]
Ensure that a line in a file is replaced by another one
file(path).replace_lines(line, replacement)
You can replace lines in a file, based on a regular expression and captured patterns
Syntax
The content to match in the file is an unanchored PCRE regular expression, which you can replace with the content of replacement.
Content can be captured in regular expression, and be reused with the
notation ${match.1}
(for first matched content), ${match.2}
for
second, etc, and the special captured group ${match.0}
for the whole
text.
Example
Here is an example to remove enclosing specific tags
file_replace_lines("/PATH_TO_MY_FILE/file", "<my>(.*)<pattern>", "my ${match.1} pattern")
line
Line to match in the file
replacement
Line to add in the file as a replacement
present [windows, unix]
Create a file if it doesn’t exist
file(path).present()
lines_present [windows, unix]
Ensure that one or more lines are present in a file
file(path).lines_present(lines)
lines
Line(s) to add in the file
lines_absent [windows, unix]
Ensure that a line is absent in a specific location
file(path).lines_absent(lines)
lines
Line(s) to remove in the file
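A hypothetical rudder-lang snippet (file and line are illustrative) ensuring a sysctl setting line exists:

```rudder-lang
@component = "Lines present"
file("/etc/sysctl.conf").lines_present("net.ipv4.ip_forward = 1") as sysctl_ip_forward
```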
line_present_in_xml_tag [unix]
Ensure that a line is present within a tag in a specific location. The objective of this method is to handle XML-style files. Note that if the tag is not present in the file, it won’t be added, and the edit will fail.
file(path).line_present_in_xml_tag(tag, line)
line
Line to ensure is present inside the section
tag
Name of the XML tag under which lines should be added (not including the <> brackets)
line_present_in_ini_section [unix]
Ensure that a line is present in a section in a specific location. The objective of this method is to handle INI-style files.
file(path).line_present_in_ini_section(section, line)
line
Line to ensure is present inside the section
section
Name of the INI-style section under which lines should be added (not including the [] brackets)
keys_values_present [unix]
Ensure that the file contains all pairs of "key separator value", with arbitrary separator between each key and its value
file(path).keys_values_present(keys, separator)
This method ensures key-value pairs are present in a file.
Usage
This method will iterate over the key-value pairs in the dict, and:
-
If the key is not defined in the destination, add the key
separator + value line. -
If the key is already present in the file, replace the key
separator + anything by key + separator + value
This method always ignores spaces and tabs when replacing (which means
for example that key = value
will match the =
separator).
Keys are considered unique (to allow replacing the value), so you should use file_ensure_lines_present if you want to have multiple lines with the same key.
Example
If you have an initial file (/etc/myfile.conf
) containing:
key1 = something
key3 = value3
To define key-value pairs, use the variable_dict or variable_dict_from_file methods.
For example, if you use the following content (stored in
/tmp/data.json
):
{
"key1": "value1",
"key2": "value2"
}
With the following policy:
# Define the `content` variable in the `configuration` prefix from the json file
variable_dict_from_file("configuration", "content", "/tmp/data.json")
# Enforce the presence of the key-value pairs
file_ensure_keys_values("/etc/myfile.conf", "configuration.content", " = ")
The destination file (/etc/myfile.conf
) will contain:
key1 = value1
key3 = value3
key2 = value2
keys
Name of the dict structure (without "$\{}") containing the keys (keys of the dict), and values to define (values of the dict)
separator (can contain only white-space chars)
Separator between key and value, for example "=" or " " (without the quotes)
key_value_present_option [unix]
Ensure that the file contains a pair of "key separator value", with options on the spacing around the separator
file(path).key_value_present_option(key, value, separator, option)
Edit (or create) the file, and ensure it contains an entry key → value with arbitrary separator between the key and its value. If the key is already present, the method will change the value associated with this key.
key
Key to define
option (strict
, lax
)
Option for the spacing around the separator: strict, which prevent spacing (space or tabs) around separators, or lax which accepts any number of spaces around separators
separator (can contain only white-space chars)
Separator between key and value, for example "=" or " " (without the quotes)
value
Value to define
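A hypothetical rudder-lang snippet (file, key and value are illustrative); with the lax option, any spacing around the separator is accepted when matching the existing line:

```rudder-lang
@component = "Key value present"
file("/etc/ssh/sshd_config").key_value_present_option("PermitRootLogin", "no", " ", "lax") as sshd_permit_root
```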
key_value_present_in_ini_section [unix]
Ensure that a key-value pair is present in a section in a specific location. The objective of this method is to handle INI-style files.
file(path).key_value_present_in_ini_section(section, name, value)
name
Name of the key to add or edit
section
Name of the INI-style section under which the line should be added or modified (not including the [] brackets)
value
Value of the key to add or edit
key_value_present [unix]
Ensure that the file contains a pair of "key separator value"
file(path).key_value_present(key, value, separator)
Edit (or create) the file, and ensure it contains an entry key → value with arbitrary separator between the key and its value. If the key is already present, the method will change the value associated with this key.
key
Key to define
separator (can contain only white-space chars)
Separator between key and value, for example "=" or " " (without the quotes)
value
Value to define
key_value_parameter_present_in_list [unix]
Ensure that one parameter exists in a list of parameters, on one single line, in the right hand side of a key→values line
file(path).key_value_parameter_present_in_list(key, key_value_separator, parameter, parameter_separator, leading_char_separator, closing_char_separator)
Edit the file, and ensure it contains the defined parameter in the list of values on the right hand side of a key→values line. If the parameter is not there, it will be added at the end, separated by parameter_separator. Optionally, you can define leading and closing characters to enclose the parameters. If the key does not exist in the file, it will be added along with the parameter.
Example
If you have an initial file (/etc/default/grub
) containing
GRUB_CMDLINE_XEN="dom0_mem=16G"
To add parameter dom0_max_vcpus=32
in the right hand side of the line,
you’ll need the following policy
file_ensure_key_value_parameter_in_list("/etc/default/grub", "GRUB_CMDLINE", "=", "dom0_max_vcpus=32", " ", "\"", "\"");
closing_char_separator (can be empty)
closing character of the parameters
key
Full key name
key_value_separator (can contain only white-space chars)
character used to separate key and value in a key-value line
leading_char_separator (can be empty)
leading character of the parameters
parameter
String representing the sub-value to ensure is present in the list of parameters that form the value part of that line
parameter_separator (can contain only white-space chars)
Character used to separate parameters in the list
key_value_parameter_absent_in_list [unix]
Ensure that a parameter doesn’t exist in a list of parameters, on one single line, in the right hand side of a key→values line
file(path).key_value_parameter_absent_in_list(key, key_value_separator, parameter_regex, parameter_separator, leading_char_separator, closing_char_separator)
Edit the file, and ensure it does not contain the defined parameter in the list of values on the right hand side of a key→values line. If the parameter is there, it will be removed. Please note that the parameter can be a regular expression. The method will also remove any whitespace character between the parameter and parameter_separator. Optionally, you can define leading and closing characters to enclose the parameters.
Example
If you have an initial file (/etc/default/grub
) containing
GRUB_CMDLINE_XEN="dom0_mem=16G dom0_max_vcpus=32"
To remove parameter dom0_max_vcpus=32
in the right hand side of the
line, you’ll need the following policy
file_ensure_key_value_parameter_not_in_list("/etc/default/grub", "GRUB_CMDLINE", "=", "dom0_max_vcpus=32", " ", "\"", "\"");
closing_char_separator (can be empty)
closing character of the parameters
key
Full key name
key_value_separator (can contain only white-space chars)
character used to separate key and value in a key-value line
leading_char_separator (can be empty)
leading character of the parameters
parameter_regex
Regular expression matching the sub-value to ensure is not present in the list of parameters that form the value part of that line
parameter_separator (can contain only white-space chars)
Character used to separate parameters in the list
from_template_type [unix]
Build a file from a template
file(path).from_template_type(destination, template_type)
These methods write a file based on a provided template and the data available to the agent.
Usage
To use these methods (file_from_template_*
), you need to have:
-
a template file
-
data to fill this template
The template file should be somewhere on the local file system, so if you want to use a file shared from the policy server, you need to copy it first (using file_copy_from_remote_source).
It is common to use a specific folder to store those templates after
copy, for example in ${sys.workdir}/templates/
.
The data that will be used while expanding the template is the data available in the agent at the time of expansion. That means:
-
Agent’s system variables (
${sys.*}
, …) and conditions (linux
, …) -
data defined during execution (result conditions of generic methods, …)
-
conditions based on
condition_
generic methods -
data defined in ncf using
variable_*
generic methods, which allow for example to load data from local json or yaml files.
Template types
ncf currently supports three templating languages:
-
mustache templates, which are documented in file_from_template_mustache
-
jinja2 templates, which are documented in file_from_template_jinja2
-
CFEngine templates, which are a legacy implementation that is here for compatibility, and should not be used for new templates.
Example
Here is a complete example of templating usage:
The (basic) template file, present on the server in
/PATH_TO_MY_FILE/ntp.conf.mustache
(for syntax reference, see
file_from_template_mustache):
{{#classes.linux}}
server {{{vars.configuration.ntp.hostname}}}
{{/classes.linux}}
{{^classes.linux}}
server hardcoded.server.example
{{/classes.linux}}
And on your local node in /tmp/ntp.json
, the following json file:
{ "hostname": "my.hostname.example" }
And the following policy:
# Copy the file from the policy server
file_copy_from_remote_source("/PATH_TO_MY_FILE/ntp.conf.mustache", "${sys.workdir}/templates/ntp.conf.mustache")
# Define the `ntp` variable in the `configuration` prefix from the json file
variable_dict_from_file("configuration", "ntp", "/tmp/ntp.json")
# Expand your template
file_from_template_type("${sys.workdir}/templates/ntp.conf.mustache", "/etc/ntp.conf", "mustache")
# or
# file_from_template_mustache("${sys.workdir}/templates/ntp.conf.mustache", "/etc/ntp.conf")
The destination file will contain the expanded content, for example on a Linux node:
server my.hostname.example
destination
Destination file (absolute path on the target node)
template_type
Template type (cfengine, jinja2 or mustache)
from_template_mustache [windows, unix]
Build a file from a mustache template
file(path).from_template_mustache(destination)
See file_from_template_type for general documentation about templates usage.
Syntax
Mustache is a logic-less templating language, available in a lot of languages, and used for file templating in Rudder. The mustache syntax reference is https://mustache.github.io/mustache.5.html.
We will describe here how to get agent data into a template. As explained in the general templating documentation, we can access various data in a mustache template.
The main specificity compared to standard mustache syntax is the use of prefixes in all expanded values:
-
classes
to access conditions -
vars
to access all variables
Here is how to display content depending on conditions definition:
{{#classes.my_condition}}
content when my_condition is defined
{{/classes.my_condition}}
{{^classes.my_condition}}
content when my_condition is *not* defined
{{/classes.my_condition}}
Note: You cannot use condition expressions here.
Here is how to display a scalar variable value (integer, string, …),
if you have defined
variable_string("variable_prefix", "my_variable", "my_value")
:
{{{vars.variable_prefix.my_variable}}}
We use the triple {{{ }}} to avoid escaping HTML entities.
Iteration is done using a syntax similar to scalar variables, but applied on container variables.
- Use {{#vars.container}} content {{/vars.container}} to iterate
- Use {{{.}}} for the current element value in iteration
- Use {{{key}}} for the key value in current element
- Use {{{.key}}} for the key value in current element (Linux only)
- Use {{{@}}} for the current element key in iteration (Linux only)
To iterate over a list, for example defined with:
variable_iterator("variable_prefix", "iterator_name", "a,b,c", ",")
Use the following file:
{{#vars.variable_prefix.iterator_name}}
{{{.}}} is the current iterator_name value
{{/vars.variable_prefix.iterator_name}}
Which will be expanded as:
a is the current iterator_name value
b is the current iterator_name value
c is the current iterator_name value
To iterate over a container defined by the following json file, loaded
with variable_dict_from_file("variable_prefix", "dict_name", "path")
:
{
"hosts": [
"host1",
"host2"
],
"files": [
{"name": "file1", "path": "/path1", "users": [ "user1", "user11" ] },
{"name": "file2", "path": "/path2", "users": [ "user2" ] }
],
"properties": {
"prop1": "value1",
"prop2": "value2"
}
}
Use the following template:
{{#vars.variable_prefix.dict_name.hosts}}
{{{.}}} is the current hosts value
{{/vars.variable_prefix.dict_name.hosts}}
# will display the name and path of the current file
{{#vars.variable_prefix.dict_name.files}}
{{{.name}}}: {{{.path}}}
{{/vars.variable_prefix.dict_name.files}}
# will display the users list of each file
{{#vars.variable_prefix.dict_name.files}}
{{{.name}}}:{{#users}} {{{.}}}{{/users}}
{{/vars.variable_prefix.dict_name.files}}
# will display the current properties key/value pair
{{#vars.variable_prefix.dict_name.properties}}
{{{@}}} -> {{{.}}}
{{/vars.variable_prefix.dict_name.properties}}
Which will be expanded as:
host1 is the current hosts value
host2 is the current hosts value
# will display the name and path of the current file
file1: /path1
file2: /path2
# will display the users list of each file
file1: user1 user11
file2: user2
# will display the current properties key/value pair
prop1 -> value1
prop2 -> value2
Note: You can use {{#-top-}} … {{/-top-}}
to iterate over the top
level container.
destination
Destination file (absolute path on the target node)
from_template_jinja2 [unix]
Build a file from a jinja2 template
file(path).from_template_jinja2(destination)
See file_from_template_type for general documentation about templates usage.
This generic method will build a file from a jinja2 template using data (conditions and variables) found in the execution context.
Setup
It requires the jinja2 python module to be installed on the node; this can usually be done in ncf with package_present("python-jinja2", "", "", "").
Warning: If you are using a jinja2 version older than 2.7, trailing newlines will not be preserved in the destination file.
Syntax
Jinja2 is a powerful templating language, running in Python. The Jinja2 syntax reference documentation is http://jinja.pocoo.org/docs/dev/templates/, which will likely be useful, as Jinja2 is very rich and allows a lot more than what is explained here.
This section presents some simple cases that cover what can be done with mustache templating, and the way the agent data is provided to the templating engine.
The main specificity of jinja2 templating is the use of two root containers:
- classes to access currently defined conditions
- vars to access all currently defined variables
Note: You can add comments in the template that will not be rendered in the output file, using {# … #}.
You can extend the Jinja2 templating engine by adding custom FILTERS and
TESTS in the script
/var/rudder/configuration-repository/ncf/10_ncf_internals/modules/extensions/jinja2_custom.py
For instance, to add a filter to uppercase a string and a test if a
number is odd, you can create the file
/var/rudder/configuration-repository/ncf/10_ncf_internals/modules/extensions/jinja2_custom.py
on your Rudder server with the following content:
def uppercase(input):
    return input.upper()

def odd(value):
    return True if (value % 2) else False

FILTERS = {'uppercase': uppercase}
TESTS = {'odd': odd}
These filters and tests will be usable in your jinja2 templates automatically.
To display content based on conditions definition:
{% if classes.my_condition is defined %}
display this if defined
{% endif %}
{% if not classes.my_condition is defined %}
display this if not defined
{% endif %}
Note: You cannot use condition expressions here.
You can also use other tests, for example other built-in ones or those
defined in jinja2_custom.py
:
{% if vars.variable_prefix.my_number is odd %}
display if my_number is odd
{% endif %}
Here is how to display a scalar variable value (integer, string, …),
if you have defined
variable_string("variable_prefix", "my_variable", "my_value")
:
{{ vars.variable_prefix.my_variable }}
You can also modify what is displayed by using filters. The built-in
filters can be extended in jinja2_custom.py
:
{{ vars.variable_prefix.my_variable | uppercase }}
Will display the variable in uppercase.
To iterate over a list, for example defined with:
variable_iterator("variable_prefix", "iterator_name", "a,b,c", ",")
Use the following file:
{% for item in vars.variable_prefix.iterator_name %}
{{ item }} is the current iterator_name value
{% endfor %}
Which will be expanded as:
a is the current iterator_name value
b is the current iterator_name value
c is the current iterator_name value
To iterate over a container defined by the following json file, loaded
with variable_dict_from_file("variable_prefix", "dict_name", "path")
:
{
"hosts": [
"host1",
"host2"
],
"files": [
{"name": "file1", "path": "/path1", "users": [ "user1", "user11" ] },
{"name": "file2", "path": "/path2", "users": [ "user2" ] }
],
"properties": {
"prop1": "value1",
"prop2": "value2"
}
}
Use the following template:
{% for item in vars.variable_prefix.dict_name.hosts %}
{{ item }} is the current hosts value
{% endfor %}
# will display the name and path of the current file
{% for file in vars.variable_prefix.dict_name.files %}
{{ file.name }}: {{ file.path }}
{% endfor %}
# will display the users list of each file
{% for file in vars.variable_prefix.dict_name.files %}
{{ file.name }}: {{ file.users|join(' ') }}
{% endfor %}
# will display the current properties key/value pair
{% for key, value in vars.variable_prefix.dict_name.properties.items() %}
{{ key }} -> {{ value }}
{% endfor %}
Which will be expanded as:
host1 is the current hosts value
host2 is the current hosts value
# will display the name and path of the current file
file1: /path1
file2: /path2
# will display the users list of each file
file1: user1 user11
file2: user2
# will display the current properties key/value pair
prop1 -> value1
prop2 -> value2
destination
Destination file (absolute path on the target node)
from_string_mustache [unix]
Build a file from a mustache string
file(path).from_string_mustache(destination)
destination
Destination file (absolute path on the target node)
from_shared_folder [windows, unix]
Ensure that a file or directory is copied from Rudder shared folder (/var/rudder/configuration-repository/shared-files)
file(path).from_shared_folder(destination, hash_type)
destination
Destination file (absolute path on the target node)
hash_type (empty, sha256, sha512, md5, sha1)
Hash algorithm used to check if the file is updated (sha256, sha512). Only used on Windows, ignored on Unix. Default is sha256.
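As an illustration, here is a hypothetical rudder-lang state call in the technique format shown in the preface (the file name, destination and alias are invented for this example; the source path is relative to the shared folder):

```
@component = "File from shared folder"
file("ntp.conf").from_shared_folder("/etc/ntp.conf", "sha256") as file_from_shared_folder_ntp
```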
from_remote_source_recursion [unix]
Ensure that a file or directory is copied from a policy server
file(path).from_remote_source_recursion(destination, recursion)
This method requires that the policy server is configured to accept copy of the source file or directory from the agents it will be applied to.
You can download a file from the shared files with:
/var/rudder/configuration-repository/shared-files/PATH_TO_YOUR_DIRECTORY_OR_FILE
destination
Destination file (absolute path on the target node)
recursion
Recursion depth to enforce for this path (0, 1, 2, …, inf)
from_remote_source [unix]
Ensure that a file or directory is copied from a policy server
file(path).from_remote_source(destination)
Note: This method uses the agent native file copy protocol, and can only download files from the policy server. To download a file from an external source, you can use HTTP with the file_download method.
This method requires that the policy server is configured to accept copy of the source file from the agents it will be applied to.
You can download a file from the shared files with:
/var/rudder/configuration-repository/shared-files/PATH_TO_YOUR_FILE
destination
Destination file (absolute path on the target node)
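A hypothetical rudder-lang sketch of this state, using the shared-files layout mentioned above (destination and alias are illustrative):

```
@component = "File from remote source"
file("/var/rudder/configuration-repository/shared-files/ntp.conf").from_remote_source("/etc/ntp.conf") as file_from_remote_source_ntp
```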
from_local_source_with_check [unix]
Ensure that a file or directory is copied from a local source if a check command succeeds
file(path).from_local_source_with_check(destination, check_command, rc_ok)
This method is a conditional file copy.
It allows comparing the source and destination, and if they are different, calling a command with the source file path as argument, and only updating the destination if the command succeeds (i.e. returns a code included in rc_ok).
Examples
# To copy a configuration file only if it passes a config test:
file_from_local_source_with_check("/tmp/program.conf", "/etc/program.conf", "program --config-test", "0");
This will:
- Compare /tmp/program.conf and /etc/program.conf, and return kept if the files are the same
- If not, execute program --config-test "/tmp/program.conf" and check the return code
- If it is one of the rc_ok codes, copy /tmp/program.conf into /etc/program.conf and return repaired
- If not, return an error
check_command
Command to run, it will get the source path as argument
destination
Destination file (absolute path on the target node)
rc_ok (can be empty)
Return codes to be considered as valid, separated by a comma (default is 0)
from_local_source_recursion [unix]
Ensure that a file or directory is copied from a local source
file(path).from_local_source_recursion(destination, recursion)
destination
Destination file (absolute path on the target node)
recursion
Recursion depth to enforce for this path (0, 1, 2, …, inf)
from_local_source [windows, unix]
Ensure that a file or directory is copied from a local source
file(path).from_local_source(destination)
destination
Destination file (absolute path on the target node)
from_http_server [windows, unix]
Download a file if it does not exist, using curl with a fallback on wget
file(path).from_http_server(destination)
This method finds an HTTP command-line tool and downloads the given source into the destination.
This method will NOT update the file after the first download until its removal.
It tries curl first, and wget as fallback.
destination
File destination (absolute path on the target node)
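For example, a hypothetical rudder-lang call (the URL and destination are invented for illustration):

```
@component = "File from http server"
file("https://repo.example.com/scripts/check.sh").from_http_server("/opt/scripts/check.sh") as file_from_http_server_check
```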
content [windows, unix]
Enforce the content of a file
file(path).content(lines, enforce)
enforce
Enforce the file to contain only line(s) defined (true or false)
lines
Line(s) to add in the file - if lines is a list, please use @{lines} to pass the iterator rather than iterating over each value
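A hypothetical rudder-lang sketch enforcing a file's full content (path and line are illustrative):

```
@component = "File content"
file("/etc/motd").content("Welcome to this node, managed by Rudder", "true") as file_content_motd
```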
check_symlinkto [unix]
Checks if first file is symlink to second file
file(path).check_symlinkto(target)
This bundle will define a condition
file_check_symlinkto_${target}_{ok, reached, kept}
if the file
${symlink}
is a symbolic link to ${target}
, or
file_check_symlinkto_${target}_{not_ok, reached, not_kept, failed}
if it is not a symbolic link, or if any of the files does not exist. The
symlink’s path is resolved to the absolute path and checked against the
target file’s path, which must also be an absolute path.
target
Target file (absolute path on the target node)
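A hypothetical rudder-lang sketch (both paths are illustrative):

```
@component = "Check symlink target"
file("/etc/localtime").check_symlinkto("/usr/share/zoneinfo/UTC") as file_check_symlinkto_localtime
```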
check_symlink [unix]
Checks if a file exists and is a symlink
file(path).check_symlink()
This bundle will define a condition
file_check_symlink_${file_name}_{ok, reached, kept}
if the file is a
symlink, or
file_check_symlink_${file_name}_{not_ok, reached, not_kept, failed}
if
the file is not a symlink or does not exist
check_socket [unix]
Checks if a file exists and is a socket
file(path).check_socket()
This bundle will define a condition
file_check_socket_${file_name}_{ok, reached, kept}
if the file is a
socket, or
file_check_socket_${file_name}_{not_ok, reached, not_kept, failed}
if
the file is not a socket or does not exist
check_regular [unix]
Checks if a file exists and is a regular file
file(path).check_regular()
This bundle will define a condition
file_check_regular_${file_name}_{ok, reached, kept}
if the file is a
regular file, or
file_check_regular_${file_name}_{not_ok, reached, not_kept, failed}
if
the file is not a regular file or does not exist
check_hardlink [unix]
Checks if two files are the same (hard links)
file(path).check_hardlink(file_name_2)
This bundle will define a condition
file_check_hardlink_${file_name_1}_{ok, reached, kept}
if the two
files ${file_name_1}
and ${file_name_2}
are hard links of each
other, or
file_check_hardlink_${file_name_1}_{not_ok, reached, not_kept, failed}
if the files are not hard links.
file_name_2
File name #2 (absolute path on the target node)
check_exists [unix]
Checks if a file exists
file(path).check_exists()
This bundle will define a condition
file_check_exists_${file_name}_{ok, reached, kept}
if the file exists,
or file_check_exists_${file_name}_{not_ok, reached, not_kept, failed}
if the file does not exist
check_character_device [unix]
Checks if a file exists and is a character device
file(path).check_character_device()
This bundle will define a condition
file_check_character_device_${file_name}_{ok, reached, kept}
if the
file is a character device, or
file_check_character_device_${file_name}_{not_ok, reached, not_kept, failed}
if the file is not a character device or does not exist
check_block_device [unix]
Checks if a file exists and is a block device
file(path).check_block_device()
This bundle will define a condition
file_check_block_device_${file_name}_{ok, reached, kept}
if the file
is a block device, or
file_check_block_device_${file_name}_{not_ok, reached, not_kept, failed}
if the file is not a block device or does not exist
check_FIFO_pipe [unix]
Checks if a file exists and is a FIFO/Pipe
file(path).check_FIFO_pipe()
This bundle will define a condition
file_check_FIFO_pipe_${file_name}_{ok, reached, kept}
if the file is a
FIFO, or
file_check_FIFO_pipe_${file_name}_{not_ok, reached, not_kept, failed}
if the file is not a FIFO or does not exist
block_present_in_section [unix]
Ensure that a section contains exactly a text block
file(path).block_present_in_section(section_start, section_end, block)
block
Block representing the content of the section
section_end
End of the section
section_start
Start of the section
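A hypothetical rudder-lang sketch (the section markers, path and block content are invented):

```
@component = "Block present in section"
file("/etc/hosts").block_present_in_section("# BEGIN Rudder section", "# END Rudder section", "192.168.1.10 intranet") as file_block_in_section_hosts
```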
block_present [unix]
Ensure that a text block is present in a specific location
file(path).block_present(block)
block
Block(s) to add in the file
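A hypothetical rudder-lang sketch (path and block are illustrative):

```
@component = "Block present"
file("/etc/profile.d/editor.sh").block_present("export EDITOR=vim") as file_block_present_editor
```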
augeas_set [unix]
Use Augeas binaries to call Augtool commands and options to set a node label’s value.
file(path).augeas_set(value, lens, file)
Augeas is a tool that provides an abstraction layer for all the complexities of editing configuration files with regular expressions. It represents system configuration files as a tree hierarchy, letting you safely modify a file by providing the path to a node label’s value. Augeas uses lenses, which are modules in charge of identifying files and converting them to and from this tree representation. This way, you first manipulate the tree, then save the changes back to the configuration files on the system.
In this method, we use augtool commands and options to set the value of a given node’s label (that is, to modify your configuration file) by specifying the path to it. The method has 4 parameters in total: path, value, lens and file.
There are two ways to use this method: either you simply provide the path to the node’s label as a parameter, or you specify a file associated with a lens and then provide the regular path. When you only specify the path to the node’s label, Augeas will by default load all lenses and files. On the other hand, if you have a specific file, for example a JSON file that you want to associate with the existing Json lens, you need to fill in the file and lens parameters as well; this way Augeas will only load the files and lenses you have specified.
The generic method will set a node label’s value on the agent; if Augeas is not installed on the agent, it will produce an error. The method makes a backup of the file you modified before applying any changes on the node; you can find these backups in the '/var/rudder/modified-files/' directory.
Two use case examples:
In the first case, suppose you simply want to set the IP address of the first line in the /etc/hosts file to 192.168.1.5; to do so you only need to provide the path and value parameters.
file_augeas_set("/etc/hosts/1/ipaddr","192.168.1.5");
The second form covers two needs: either you want to prevent Augeas from loading all lenses and files while executing your request, or you want to associate the Hosts lens with the /etc/hosts file and then set the value for the given path node.
file_augeas_set("/etc/hosts/1/ipaddr","192.168.1.5","Hosts","/etc/hosts");
file (can be empty)
The file to load, in case you want to load a specific file associated with its lens
lens (can be empty)
The lens to load, in case you want to load a specific lens associated with its file
value
The value to set
augeas_commands [unix]
Use Augeas binaries to execute augtool commands and options directly on the agent.
file(path).augeas_commands(variable_name, commands, autoload)
Augeas is a tool that provides an abstraction layer for all the complexities of editing configuration files with regular expressions. It represents system configuration files as a tree hierarchy, letting you safely modify a file by providing the path to a node label’s value. Augeas uses lenses, which are modules in charge of identifying files and converting them to and from this tree representation. This way, you first manipulate the tree, then save the changes back to the configuration files on the system.
This method gives the possibility to enter a list of augtool commands and options as a parameter. The method has 4 parameters in total: variable_prefix, variable_name, commands and autoload. Augtool provides a bunch of other commands and options that you can use in this generic method, such as 'match' to print the matches for a specific path expression, 'span' to print the position in the input file corresponding to the tree, 'retrieve' to transform the tree into text and 'save' to save all pending changes. If Augeas is not installed on the agent, the method will produce an error.
Depending on your needs, there are two main ways to use this method.
With autoload
The first case activates the autoload option. It is true by default, which means you can leave the autoload parameter’s field empty (or fill it with true) and Augeas will load all files and lenses before executing the commands you have specified. Below is an example that shows the configuration files parsed by default in the /files/etc directory, and then also prints the content of the sshd_config file.
file_augeas_commands("label","value","ls /files/etc \n print /files/etc/ssh/sshd_config","")
file_augeas_commands("label","value","ls /files/etc \n print /files/etc/ssh/sshd_config","true")
Without autoload
The second case deactivates that option, meaning that you specify false as the parameter. In this case you have to load your files and lenses manually in the commands parameter, using set commands. Below is a second example where you set the lens and the file, then verify the result by checking the /augeas/load path.
file_augeas_commands("label","value","set /augeas/load/Sshd/lens "Sshd.lns" \n set /augeas/load/Sshd/incl "/etc/ssh/sshd_config" \n load \n print /augeas/load/Sshd \n print /files/etc/ssh/sshd_config","false")
autoload (empty, true, false)
Deactivate the autoload option if you don’t want Augeas to load all the files/lenses; it is true by default.
commands
The augeas command(s)
variable_name
The variable to define, the full name will be variable_prefix.variable_name
absent [windows, unix]
Remove a file if it exists
file(path).absent()
template_expand [unix]
Warning: DEPRECATED: This method uses CFEngine’s templating which is deprecated and not portable across agents. Please use file_from_template_mustache or file_from_template_jinja2 instead.
This is a bundle to expand a template in a specific location
file(path).template_expand(target_file, mode, owner, group)
group
Group of destination file
mode
Mode of destination file
owner
Owner of destination file
target_file
File name (with full path) where to expand the template
from_template [windows, unix]
Warning: DEPRECATED: This method uses CFEngine’s templating which is deprecated and not portable across agents. Please use file_from_template_mustache or file_from_template_jinja2 instead.
Build a file from a legacy CFEngine template
file(path).from_template(destination)
See file_from_template_type for general documentation about templates usage.
destination
Destination file (absolute path on the target node)
group
resource group(name)
name
Group name
States
present [unix]
Create a group
group(name).present()
absent [unix]
Make sure a group is absent
group(name).absent()
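A hypothetical rudder-lang sketch of both states (group names are illustrative):

```
@component = "Group present"
group("developers").present() as group_present_developers
@component = "Group absent"
group("legacyusers").absent() as group_absent_legacyusers
```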
http_request
resource http_request(method, url)
method
Method to call the URL (GET, POST, PUT, DELETE)
url
URL to query
States
content_headers [unix]
Make an HTTP request with a specific header
http_request(method, url).content_headers(content, headers)
Perform an HTTP request on the URL, method and headers provided, and send the content provided. Will return an error if the request fails.
content
Content to send
headers (can be empty)
Headers to include in the HTTP request
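A hypothetical rudder-lang sketch (the URL, content and header are invented for illustration):

```
@component = "HTTP POST content"
http_request("POST", "https://api.example.com/v1/events").content_headers("status=deployed", "Content-Type: application/x-www-form-urlencoded") as http_post_event
```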
check_status_headers [unix]
Checks status of an HTTP URL
http_request(method, url).check_status_headers(expected_status, headers)
Perform an HTTP request on the URL, method and headers provided, and check that the response has the expected status code (i.e. 200, 404, 503, etc.)
expected_status
Expected status code of the HTTP response
headers (can be empty)
Headers to include in the HTTP request (as a string, without ')
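A hypothetical rudder-lang sketch (the URL is invented; headers left empty):

```
@component = "Check HTTP status"
http_request("GET", "https://intranet.example.com/health").check_status_headers("200", "") as http_check_health
```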
kernel_module
resource kernel_module(name)
name
Complete name of the kernel module, as seen by lsmod or listed in /proc/modules
States
not_loaded [unix]
Ensure that a given kernel module is not loaded on the system
kernel_module(name).not_loaded()
Ensure that a given kernel module is not loaded on the system. If the module is loaded, it will try to unload it using modprobe.
loaded [unix]
Ensure that a given kernel module is loaded on the system
kernel_module(name).loaded()
Ensure that a given kernel module is loaded on the system. If the module is not loaded, it will try to load it via modprobe.
enabled_at_boot [unix]
Ensure that a given kernel module will be loaded at system boot
kernel_module(name).enabled_at_boot()
Ensure that a given kernel module is enabled at boot on the system. This method only works on systemd systems. Rudder will look for a line matching the module name in a given section in the file:
-
/etc/modules-load.d/enabled_by_rudder.conf
on systemd systems
If the module is already enabled by a different option file than the one used by Rudder, Rudder will add an entry in the file it manages, listed above, and leave the already present one intact. The modifications are persistent and made line by line, meaning that this generic method will never remove lines in the configuration file, only add them if needed.
Please note that this method will not load the module nor configure it, it will only enable its loading at system boot. If you want to force the module to be loaded, use instead the method kernel_module_loaded. If you want to configure the module, use instead the method kernel_module_configuration.
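A hypothetical rudder-lang sketch combining boot-time enablement with immediate loading (the module name is illustrative):

```
@component = "Kernel module enabled at boot"
kernel_module("br_netfilter").enabled_at_boot() as kernel_module_boot_br_netfilter
@component = "Kernel module loaded"
kernel_module("br_netfilter").loaded() as kernel_module_loaded_br_netfilter
```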
configuration [unix]
Ensure that the modprobe configuration of a given kernel module is correct
kernel_module(name).configuration(configuration)
Ensure that the modprobe configuration of a given kernel module is correct. Rudder will search for the module configuration in a per-module dedicated section in /etc/modprobe.d/managed_by_rudder.conf.
-
If the module configuration is not found or incorrect, Rudder will (re-)create its configuration.
-
If the module is configured but with a different option file than used by Rudder, it will add the expected one in /etc/modprobe.d/managed_by_rudder.conf but will leave intact the already present one.
The configuration syntax must respect the one used by /etc/modprobe.d defined in the modprobe.d manual page.
# To pass a parameter to a module:
options module_name parameter_name=parameter_value
# To blacklist a module
blacklist modulename
# etc...
Notes: If you want to force the module to be loaded at boot, use instead the method kernel_module_enabled_at_boot, which uses other Rudder dedicated files.
Example:
To pass options to a broadcom module:
- module_name = b43
- configuration = options b43 nohwcrypt=1 qos=0
Will produce the resulting block in /etc/modprobe.d/managed_by_rudder.conf:
### b43 start section
options b43 nohwcrypt=1 qos=0
### b43 end section
configuration (must match: ^(alias|blacklist|install|options|remove|softdeps) +.*$)
Complete configuration block to put in /etc/modprobe.d/
monitoring
resource monitoring(key)
key
Name of the parameter
States
template [unix]
Add a monitoring template to a node (requires a monitoring plugin)
monitoring(key).template()
This method assigns monitoring templates to a Rudder node. The Rudder plugin corresponding to each monitoring platform will apply those templates to the node.
parameter [unix]
Add a monitoring parameter to a node (requires a monitoring plugin)
monitoring(key).parameter(value)
This method adds monitoring parameters to Rudder nodes. The monitoring parameters are used to pass configuration to the monitoring plugins running with Rudder. Expected keys and parameters are specific to each plugin and can be found in their respective documentation.
value
Value of the parameter
package
Package of the system
resource package(name)
name
Name of the package to check
States
verify_version [unix]
Verify if a package is installed in a specific version
package(name).verify_version(package_version)
package_version
Version of the package to verify (can be "latest" for latest version)
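A hypothetical rudder-lang sketch (package name and version reuse the postgresql example from the package state documentation):

```
@component = "Package version check"
package("postgresql").verify_version("9.1") as package_verify_version_postgresql
```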
verify [unix]
Verify if a package is installed in its latest version available
package(name).verify()
state_windows [windows]
This method manages packages using chocolatey on the system.
package(name).state_windows(Status, Provider, Params, Version, Source, ProviderParams, AutoUpgrade)
Install a windows package using a given provider
Parameters
Required args:
- PackageName: name of the target package
- Status: can be "present" or "absent"
Optional args:
- Provider: provider used to install the package
- Params: package parameters, passed to the installer
- Version: can be "any", "latest" or any exact specific version number
- Source: "any" or any specific arch
- ProviderParams: provider specific options
- AutoUpgrade: default set to false
Providers
The method is a simple transcription of the cchoco
cChocoPaclageInstaller
DSC resource, adapted to Rudder. The DSC module
cchoco
must be installed on your node before trying to use this
method.
You can check the cchoco/chocolatey documentation to get more detailed information on the parameters. WARNING: If some exceptions are thrown about an undefined env PATH variable after a fresh cchoco installation in Rudder, you may need to reboot your machine or notify your system that the env variables have been changed.
AutoUpgrade (empty, true, false)
autoUpgrade, defaults to false
Params (can be empty)
params to pass to the package installation
Provider (empty, choco)
defaults to choco
ProviderParams (can be empty)
provider parameters, default to choco
Source (can be empty)
source
Status (empty, present, absent)
Present, Absent and so on
Version (can be empty)
version, default to latest
state_options [unix]
Enforce the state of a package with options
package(name).state_options(version, architecture, provider, state, options)
See package_state for documentation.
architecture (can be empty)
Architecture of the package, can be an architecture name or "default" (defaults to "default")
options (can be empty)
Options to pass to the package manager (defaults to empty)
provider (empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient)
Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for system default package manager (defaults to "default")
state (empty, present, absent)
State of the package, can be "present" or "absent" (defaults to "present")
version (can be empty)
Version of the package, can be "latest" for latest version or "any" for any version (defaults to "any")
state [unix]
Enforce the state of a package
package(name).state(version, architecture, provider, state)
These methods manage packages using a package manager on the system.
package_present and package_absent use a new package implementation, different from package_install_*, package_remove_* and package_verify_*. It should be more reliable, and handle upgrades better. It is compatible though, and you can call generic methods from both implementations on the same host. The only drawback is that the agent will have to maintain two caches for package lists, which may cause a little unneeded overhead.
Package parameters
There is only one mandatory parameter, which is the package name to install. When it should be installed from a local package, you need to specify the full path to the package as name.
The version parameter allows specifying a version you want installed. It should be the complete version string as used by the package manager. This parameter allows two special values:
- any, which is the default value, and is satisfied by any version of the given package
- latest, which will ensure, at each run, that the package is at the latest available version
The last parameter is the provider, which is documented in the next section.
You can use package_state_options to pass options to the underlying package manager (currently only supported with the apt package manager).
Package providers
This method supports several package managers. You can specify the package manager you want to use or let the method choose the default for the local system.
The package providers include a caching system for package information. The package lists (installed, available and available updates) are only updated when the cache expires, or when an operation is made by the agent on packages.
Note: The implementation of package operations is done in scripts called modules, which you can find in ${sys.workdir}/modules/packages/.
apt
This package provider uses apt/dpkg to manage packages on the system. dpkg will be used for all local actions, and apt is only needed to manage updates and installation from a repository.
yum
This package provider uses yum/rpm to manage packages on the system. rpm will be used for all local actions, and yum is only needed to manage updates and installation from a repository.
It is able to downgrade packages when specifying an older version.
zypper
This package provider uses zypper/rpm to manage packages on the system. rpm will be used for all local actions, and zypper is only needed to manage updates and installation from a repository.
Note: If the package version you want to install contains an epoch, you have to specify it in the version in the epoch:version form, as reported by zypper info.
zypper_pattern
This package provider uses zypper with the -t pattern option to manage zypper patterns or meta-packages on the system.
Since a zypper pattern can be named differently than the rpm package providing it, please always use the exact pattern name (as listed in the output of zypper patterns) when using this provider.
Note: When installing a pattern from a local rpm file, Rudder assumes that the pattern is built following the official zypper documentation.
Older implementations of zypper patterns may not be supported by this module.
This provider doesn’t support installation from a file.
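As an illustration, a zypper pattern can be enforced through the generic package state by forcing the zypper_pattern provider. This is a sketch using the rudder-lang syntax shown in the preface; the pattern name lamp_server is hypothetical, always use the exact name listed by zypper patterns:

```
@component = "Package present"
# "lamp_server" is an illustrative pattern name
package("lamp_server").present("", "", "zypper_pattern") as package_present_lamp_server
```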
slackpkg
This package provider uses Slackware's installpkg and upgradepkg tools to manage packages on the system.
pkg
This package provider uses FreeBSD's pkg to manage packages on the system. This provider doesn't support installation from a file.
ips
This package provider uses Solaris’s pkg command to manage packages from IPS repositories on the system. This provider doesn’t support installation from a file.
nimclient
This package provider uses AIX's nim client to manage packages from a NIM server. This provider doesn't support installation from a file.
Examples
# To install postgresql in version 9.1 for x86_64 architecture
package_present("postgresql", "9.1", "x86_64", "");
# To ensure postgresql is always in the latest available version
package_present("postgresql", "latest", "", "");
# To ensure installing postgresql in any version
package_present("postgresql", "", "", "");
# To ensure installing postgresql in any version, forcing the yum provider
package_present("postgresql", "", "", "yum");
# To ensure installing postgresql from a local package
package_present("/tmp/postgresql-9.1-1.x86_64.rpm", "", "", "");
# To remove postgresql
package_absent("postgresql", "", "", "");
See also : package_present, package_absent, package_state_options
architecture (can be empty)
Architecture of the package, can be an architecture name or "default" (defaults to "default")
provider (empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient)
Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for the system default package manager (defaults to "default")
state (empty, present, absent)
State of the package, can be "present" or "absent" (defaults to "present")
version (can be empty)
Version of the package, can be "latest" for latest version or "any" for any version (defaults to "any")
present [unix]
Enforce the presence of a package
package(name).present(version, architecture, provider)
See package_state for documentation.
architecture (can be empty)
Architecture of the package, can be an architecture name or "default" (defaults to "default")
provider (empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient)
Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for the system default package manager (defaults to "default")
version (can be empty)
Version of the package, can be "latest" for latest version or "any" for any version (defaults to "any")
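Following the rudder-lang syntax shown in the preface, a sketch of keeping a package at its latest available version; the ntp package name is illustrative:

```
@component = "Package present"
package("ntp").present("latest", "", "") as package_present_ntp
```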
check_installed [unix]
Verify if a package is installed in any version
package(name).check_installed()
This bundle will define a condition package_check_installed_${file_name}_{ok, reached, kept} if the package is installed, or package_check_installed_${file_name}_{not_ok, reached, not_kept, failed} if it is not.
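A minimal rudder-lang sketch of this check; the package name is illustrative, and the comment only restates the condition pattern documented above:

```
@component = "Package check installed"
package("ntp").check_installed() as package_check_installed_ntp
# if ntp is installed, the {ok, reached, kept} conditions described
# above are defined and can gate later states
```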
absent [unix]
Enforce the absence of a package
package(name).absent(version, architecture, provider)
See package_state for documentation.
architecture (can be empty)
Architecture of the package, can be an architecture name or "default" (defaults to "default")
provider (empty, default, yum, apt, zypper, zypper_pattern, slackpkg, pkg, ips, nimclient)
Package provider to use, can be "yum", "apt", "zypper", "zypper_pattern", "slackpkg", "pkg", "ips", "nimclient" or "default" for the system default package manager (defaults to "default")
version (can be empty)
Version of the package or "any" for any version (defaults to "any")
remove [unix]
Warning: DEPRECATED, use package_absent instead.
Remove a package
package(name).remove()
Example:
methods:
  "any" usebundle => package_remove("htop");
install_version_cmp_update [unix]
Warning: DEPRECATED, use package_present instead.
Install a package or verify if it is installed in a specific version, or higher or lower version than a version specified, optionally test update or not (Debian-, Red Hat- or SUSE-like systems only)
package(name).install_version_cmp_update(version_comparator, package_version, action, update_policy)
Example:
methods:
  "any" usebundle => package_install_version_cmp_update("postgresql", ">=", "9.1", "verify", "false");
action
Action to perform, can be add, verify (defaults to verify)
package_version
The version of the package to verify (can be "latest" for latest version)
update_policy
While verifying packages, check against latest version ("true") or just installed ("false")
version_comparator (==, <=, >=, <, >, !=)
Comparator between installed version and defined version, can be ==, <=, >=, <, >, !=
install_version_cmp [unix]
Warning: DEPRECATED, use package_present instead.
Install a package or verify if it is installed in a specific version, or higher or lower version than a version specified
package(name).install_version_cmp(version_comparator, package_version, action)
Example:
methods:
  "any" usebundle => package_install_version_cmp("postgresql", ">=", "9.1", "verify");
action
Action to perform, can be add, verify (defaults to verify)
package_version
The version of the package to verify (can be "latest" for latest version)
version_comparator (==, <=, >=, <, >, !=)
Comparator between installed version and defined version, can be ==, <=, >=, <, >, !=
install_version [unix]
Warning: DEPRECATED, use package_present instead.
Install or update a package in a specific version
package(name).install_version(package_version)
package_version
Version of the package to install (can be "latest" to install it in its latest version)
install [unix]
Warning: DEPRECATED, use package_present instead.
Install or update a package in its latest version available
package(name).install()
permissions
resource permissions(path)
path
Path to edit
States
user_acl_present [unix]
Verify that an ace is present on a file or directory for a given user. This method will make sure the given ace is present in the POSIX ACL of the target.
permissions(path).user_acl_present(recursive, user, ace)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Please note that the mask will be automatically recalculated when editing ACLs.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the given ACE to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
The user parameter is the Linux account name for which the ACE is enforced. This method can only handle one username.
The operator can be:
-
+ to add the given ACE to the current ones.
-
- to remove the given ACE from the current ones.
-
= to force the given ACE to replace the current ones.
-
empty: if no operator is specified, it will be interpreted as =.
The ACE must match the following regular expression:
-
^[+-=]?(?=.*[rwx])r?w?x?$
Example
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::---
Applying this method with the following parameters:
-
path: /tmp/myTestFile
-
recursive: false
-
user: bob
-
ace: -rw
Will transform the previous ACLs in:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:--x
group::r--
mask::r-x
other::---
ace (must match: ^[+-=]?(?=.*[rwx])r?w?x?$)
ACE to enforce for the given user.
recursive (empty, true, false)
Should the ACE be applied recursively, "true" or "false" (defaults to "false").
user
Username of the Linux account.
user_acl_absent [unix]
Verify that an ace is absent on a file or directory for a given user. This method will make sure that no ace is present in the POSIX ACL of the target.
permissions(path).user_acl_absent(recursive, user)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Please note that the mask will be automatically recalculated when editing ACLs.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the given ACE to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
The user parameter is the Linux account name for which the ACE absence is enforced. This method can only handle one username.
Example
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::---
Applying this method with the following parameters:
-
path: /tmp/myTestFile
-
recursive: false
-
user: bob
Will transform the previous ACLs in:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
mask::r--
other::---
recursive (empty, true, false)
Should the ACE removal be recursive, "true" or "false" (defaults to "false")
user
Username of the Linux account.
type_recursion [unix]
Ensure that a file or directory is present and has the right mode/owner/group
permissions(path).type_recursion(mode, owner, group, type, recursion)
group (can be empty)
Group of the path to edit
mode (can be empty)
Mode of the path to edit
owner (can be empty)
Owner of the path to edit
recursion
Recursion depth to enforce for this path (0, 1, 2, …, inf)
type
Type of the path to edit (all/files/directories)
recursive [unix]
Verify if a file or directory has the right permissions recursively
permissions(path).recursive(mode, owner, group)
group (can be empty)
Group to enforce
mode (can be empty)
Mode to enforce
owner (can be empty)
Owner to enforce
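A rudder-lang sketch of the recursive state above; the path, mode, and www-data ownership are illustrative values, not defaults:

```
@component = "Permissions (recursive)"
# enforce mode/owner/group on /var/www and everything below it
permissions("/var/www").recursive("750", "www-data", "www-data") as permissions_recursive_var_www
```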
posix_acls_absent [unix]
Ensure that files or directories have no ACLs set
permissions(path).posix_acls_absent(recursive)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the cleanup to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
Example
The method has basically the same effect as setfacl -b <path>.
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:vagrant:rwx
group::r--
mask::rwx
other::---
It will remove all ACLs and leave only the classic rights, here:
root@server# getfacl myTestFile
# file: myTestFile
# owner: root
# group: root
user::rwx
group::r--
other::---
root@server# ls -l myTestFile
-rwxr----- 1 root root 0 Mar 22 11:24 myTestFile
root@server#
recursive (empty, true, false)
Should the ACLs cleanup be recursive, "true" or "false" (defaults to "false")
other_acl_present [unix]
Verify that the given other ace is present on a file or directory. This method will make sure the given other ace is present in the POSIX ACL of the target.
permissions(path).other_acl_present(recursive, other)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Please note that the mask will be automatically recalculated when editing ACLs.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the given ACE to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
The operator can be:
-
+ to add the given ACE to the current ones.
-
- to remove the given ACE from the current ones.
-
= to force the given ACE to replace the current ones.
-
empty: if no operator is specified, it will be interpreted as =.
The ACE must match the following regular expression:
-
^[+-=]?(?=.*[rwx])r?w?x?$
Example
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::r-x
Applying this method with the following parameters:
-
path: /tmp/myTestFile
-
recursive: false
-
other ace: -rw
Will transform the previous ACLs in:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::--x
other (must match: ^[+-=]?(?=.*[rwx])r?w?x?$)
ACE to enforce for the other class.
recursive (empty, true, false)
Should the ACE be applied recursively, "true" or "false" (defaults to "false")
ntfs [windows]
Ensure NTFS permissions on a file for a given user.
permissions(path).ntfs(user, rights, accesstype, propagationpolicy)
Ensure that the correct NTFS permissions are applied on a file for a given user.
Inheritance and propagation flags can also be managed. If left blank, no propagation will be set.
To manage effective propagation or effective access, please disable the inheritance on the file before applying this generic method.
Note: the Synchronize permission may not work in some cases; this is a known bug.
Valid values for rights:
None, ReadData, ListDirectory, WriteData, CreateFiles, AppendData, CreateDirectories, ReadExtendedAttributes, WriteExtendedAttributes, ExecuteFile, Traverse, DeleteSubdirectoriesAndFiles, ReadAttributes, WriteAttributes, Write, Delete, ReadPermissions, Read, ReadAndExecute, Modify, ChangePermissions, TakeOwnership, Synchronize, FullControl
Valid values for accesstype:
Allow, Deny
Valid values for propagationpolicy:
ThisFolderOnly, ThisFolderSubfoldersAndFiles, ThisFolderAndSubfolders, ThisFolderAndFiles, SubfoldersAndFilesOnly, SubfoldersOnly, FilesOnly
accesstype (Allow, Deny, empty)
"Allow" or "Deny"
propagationpolicy (ThisFolderOnly, ThisFolderSubfoldersAndFiles, ThisFolderAndSubfolders, ThisFolderAndFiles, SubfoldersAndFilesOnly, SubfoldersOnly, FilesOnly, empty)
Define the propagation policy of the access rule that Rudder is applying
rights
Comma separated right list
user
DOMAIN\Account
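A rudder-lang sketch of the ntfs state; the path, the MYDOMAIN\bob account, and the chosen rights are all illustrative:

```
@component = "Permissions NTFS"
# grant read access on an example folder, propagating to subfolders and files
permissions("C:\myFolder").ntfs("MYDOMAIN\bob", "ReadAndExecute", "Allow", "ThisFolderSubfoldersAndFiles") as permissions_ntfs_myFolder
```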
group_acl_present [unix]
Verify that an ace is present on a file or directory for a given group. This method will make sure the given ace is present in the POSIX ACL of the target for the given group.
permissions(path).group_acl_present(recursive, group, ace)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Please note that the mask will be automatically recalculated when editing ACLs.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the given ACE to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
The group parameter is the Linux group name for which the ACE is enforced. This method can only handle one group name.
The operator can be:
-
+ to add the given ACE to the current ones.
-
- to remove the given ACE from the current ones.
-
= to force the given ACE to replace the current ones.
-
empty: if no operator is specified, it will be interpreted as =.
The ACE must match the following regular expression:
-
^[+-=]?(?=.*[rwx])r?w?x?$
Example
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
group:bob:rwx
mask::rwx
other::---
Applying this method with the following parameters:
-
path: /tmp/myTestFile
-
recursive: false
-
group: bob
-
ace: -rw
Will transform the previous ACLs in:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
group:bob:--x
mask::r-x
other::---
ace (must match: ^[+-=]?(?=.*[rwx])r?w?x?$)
ACE to enforce for the given group.
group
Group name
recursive (empty, true, false)
Should the ACE be applied recursively, "true" or "false" (defaults to "false")
group_acl_absent [unix]
Verify that an ace is absent on a file or directory for a given group. This method will make sure that no ace is present in the POSIX ACL of the target.
permissions(path).group_acl_absent(recursive, group)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Please note that the mask will be automatically recalculated when editing ACLs.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the given ACE to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
The group parameter is the Linux group name for which the ACE absence is enforced. This method can only handle one group name.
Example
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
group:bob:rwx
mask::rwx
other::---
Applying this method with the following parameters:
-
path: /tmp/myTestFile
-
recursive: false
-
group: bob
Will transform the previous ACLs in:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
group::r--
mask::r--
other::---
group
Group name
recursive (empty, true, false)
Should the ACE removal be recursive, "true" or "false" (defaults to "false")
dirs_recursive [unix]
Verify if a directory has the right permissions recursively
permissions(path).dirs_recursive(mode, owner, group)
group (can be empty)
Group to enforce
mode (can be empty)
Mode to enforce
owner (can be empty)
Owner to enforce
dirs [unix]
Verify if a directory has the right permissions, non-recursively
permissions(path).dirs(mode, owner, group)
group (can be empty)
Group to enforce
mode (can be empty)
Mode to enforce
owner (can be empty)
Owner to enforce
acl_entry [unix]
Verify that an ace is present on a file or directory. This method will append the given aces to the current POSIX ACLs of the target.
permissions(path).acl_entry(recursive, user, group, other)
The permissions_*acl_* methods manage the POSIX ACLs on files and directories.
Please note that the mask will be automatically recalculated when editing ACLs.
Parameters
The path can use globbing with the following format:
-
* matches any filename or directory at one level, e.g. *.cf will match all files in one directory that end in .cf but it won't search across directories. */*.cf on the other hand will look two levels deep.
-
? matches a single letter
-
[a-z] matches any letter from a to z
-
{x,y,anything} will match x or y or anything.
The recursive parameter can be:
-
true to apply the given ACEs to the folder and to its sub-folders and files.
-
false to apply only to the strict match of the path.
If left blank, recursion will automatically be set to false.
ACEs for user and group can be left blank if they do not need any specification. If filled, they must respect the format:
<username|groupname>:<operator><mode>
with:
-
username being the Linux account name
-
groupname being the Linux group name
-
the current owner user and owner group, which can be designated by the character *
The operator can be:
-
+ to add the given ACE to the current ones.
-
- to remove the given ACE from the current ones.
-
= to force the given ACE to replace the current ones.
You can define multiple ACEs by separating them with commas.
The ACE for other must match the classic format:
-
[+-=]r?w?x?
It can also be left blank to leave the other ACE unchanged.
Example
Given a file with the following getfacl output:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rwx
user:bob:rwx
group::r--
mask::rwx
other::---
Applying this method with the following parameters:
-
path: /tmp/myTestFile
-
recursive: false
-
user: *:-x, bob:
-
group: *:+rw
-
other: =r
Will transform the previous ACLs in:
root@server# getfacl /tmp/myTestFile
getfacl: Removing leading '/' from absolute path names
# file: tmp/myTestFile
# owner: root
# group: root
user::rw-
user:bob:---
group::rw-
mask::rw-
other::r--
This method cannot remove a given ACL entry itself, only change its mode: see how the ACE for user bob is handled in the example above.
group (can be empty, must match: $|(([A-z0-9._-]+|\\*):([+-=]r?w?x?)?,? *)+$)
Group acls, comma separated, like: wheel:+wx, anon:-rwx
other (can be empty, must match: $|[+-=^]r?w?x?$)
Other acls, like -x
recursive (empty, true, false)
Should the ACEs be applied recursively, "true" or "false" (defaults to "false")
user (can be empty, must match: $|(([A-z0-9._-]+|\\*):([+-=]r?w?x?)?,? *)+$)
User acls, comma separated, like: bob:+rwx, alice:-w
registry
resource registry(key)
key
Registry key (ie, HKLM:\Software\Rudder)
States
key_present [windows]
This generic method checks that a Registry Key exists
registry(key).key_present()
Create a Registry Key if it does not exist. There are two different supported syntaxes to describe a Registry Key:
-
with short drive name and ":" like HKLM:\SOFTWARE\myKey
-
with long drive name and without ":" like HKEY_LOCAL_MACHINE\SOFTWARE\myKey
Please note that Rudder cannot create new drives or new "first-level" Registry Keys.
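A rudder-lang sketch of this state, using the short drive syntax described above; the key name myKey is illustrative:

```
@component = "Registry key present"
registry("HKLM:\SOFTWARE\myKey").key_present() as registry_key_present_myKey
```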
key_absent [windows]
This generic method checks that a Registry Key does not exist
registry(key).key_absent()
Remove a Registry Key if it is present on the system.
There are two different supported syntaxes to describe a Registry Key:
-
with short drive name and ":" like HKLM:\SOFTWARE\myKey
-
with long drive name and without ":" like HKEY_LOCAL_MACHINE\SOFTWARE\myKey
Please note that Rudder cannot remove drives or "first-level" Registry Keys.
entry_present [windows]
This generic method defines if a registry entry exists with the correct value
registry(key).entry_present(entry, value, registryType)
entry
Registry entry
registryType
Registry value type (String, ExpandString, MultiString, Dword, Qword)
value
Registry value
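A rudder-lang sketch of enforcing a registry entry; the key, entry name, and value are illustrative:

```
@component = "Registry entry present"
# ensure the entry "myEntry" holds the Dword value 1 under an example key
registry("HKLM:\SOFTWARE\myKey").entry_present("myEntry", "1", "Dword") as registry_entry_present_myEntry
```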
entry_absent [windows]
This generic method checks that a registry entry does not exist
registry(key).entry_absent(entry)
entry
Registry entry name
schedule
resource schedule(job_id)
job_id
A string to identify this job
States
simple_stateless [unix]
Trigger a repaired outcome when a job should be run (without checks)
schedule(job_id).simple_stateless(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days)
This bundle will define a condition schedule_simple_${job_id}_{kept,repaired,not_ok,ok,reached}:
-
_ok or _kept when there is nothing to do
-
_repaired if the job should run
-
_not_ok and _reached have their usual meaning
No effort is made to check whether a run has already been done for this period. If the agent is run twice, the job will be run twice, and if the agent is not run, the job will not be run.
agent_periodicity
Agent run interval (in minutes)
max_execution_delay_hours
On how many hours you want to spread the job
max_execution_delay_minutes
On how many minutes you want to spread the job
periodicity_days
Desired job run interval (in days)
periodicity_hours
Desired job run interval (in hours)
periodicity_minutes
Desired job run interval (in minutes)
start_on_day_of_week
At which day of week should be the first run
start_on_hours
At which hour should be the first run
start_on_minutes
At which minute should be the first run
simple_nodups [unix]
Trigger a repaired outcome when a job should be run (avoid running twice)
schedule(job_id).simple_nodups(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days)
This bundle will define a condition schedule_simple_${job_id}_{kept,repaired,not_ok,ok,reached}:
-
_ok or _kept when there is nothing to do
-
_repaired if the job should run
-
_not_ok and _reached have their usual meaning
If the agent is run twice (for example from a manual run), the job is run only once. However, if the agent run is skipped during the period, the job is never run.
agent_periodicity
Agent run interval (in minutes)
max_execution_delay_hours
On how many hours you want to spread the job
max_execution_delay_minutes
On how many minutes you want to spread the job
periodicity_days
Desired job run interval (in days)
periodicity_hours
Desired job run interval (in hours)
periodicity_minutes
Desired job run interval (in minutes)
start_on_day_of_week
At which day of week should be the first run
start_on_hours
At which hour should be the first run
start_on_minutes
At which minute should be the first run
simple_catchup [unix]
Trigger a repaired outcome when a job should be run (avoid losing a job)
schedule(job_id).simple_catchup(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days)
This bundle will define a condition schedule_simple_${job_id}_{kept,repaired,not_ok,ok,reached}:
-
_ok or _kept when there is nothing to do
-
_repaired if the job should run
-
_not_ok and _reached have their usual meaning
If the agent run is skipped during the period, the method tries to catch up the run on the next agent run. If the agent run is skipped twice, only one run is caught up. If the agent is run twice (for example from a manual run), the job is run only once.
agent_periodicity
Agent run interval (in minutes)
max_execution_delay_hours
On how many hours you want to spread the job
max_execution_delay_minutes
On how many minutes you want to spread the job
periodicity_days
Desired job run interval (in days)
periodicity_hours
Desired job run interval (in hours)
periodicity_minutes
Desired job run interval (in minutes)
start_on_day_of_week
At which day of week should be the first run
start_on_hours
At which hour should be the first run
start_on_minutes
At which minute should be the first run
simple [unix]
Trigger a repaired outcome when a job should be run
schedule(job_id).simple(agent_periodicity, max_execution_delay_minutes, max_execution_delay_hours, start_on_minutes, start_on_hours, start_on_day_of_week, periodicity_minutes, periodicity_hours, periodicity_days, mode)
This method computes the expected time for running the job, based on the parameters and splayed using system ids, and defines a condition based on this computation:
-
schedule_simple_${job_id}_kept if the job should not be run now
-
schedule_simple_${job_id}_repaired if the job should be run
-
schedule_simple_${job_id}_error if there is an inconsistency in the method parameters
Example
If you want to run a job, at every hour and half-hour (0:00 and 0:30), with no spread across system, with an agent running with default schedule of 5 minutes, and making sure that the job is run (if the agent couldn’t run it, then at the next agent execution the job should be run), you will call the method with the following parameters:
schedule_simple("job_schedule_id", "5", "0", "0", "0", "0", "0", "30", "0", "0", "catchup")
During each run right after o'clock and half-hour, this method will define the condition schedule_simple_job_schedule_id_repaired, which you can use as a condition for a generic method such as command_execution.
agent_periodicity
Agent run interval (in minutes)
max_execution_delay_hours
On how many hours you want to spread the job
max_execution_delay_minutes
On how many minutes you want to spread the job
mode
"nodups": avoid duplicate runs in the same period / "catchup": avoid duplicates and, if one or more runs have been missed, run once before the next period / "stateless": no check is done on past runs
periodicity_days
Desired job run interval (in days)
periodicity_hours
Desired job run interval (in hours)
periodicity_minutes
Desired job run interval (in minutes)
start_on_day_of_week
At which day of week should be the first run
start_on_hours
At which hour should be the first run
start_on_minutes
At which minute should be the first run
service
resource service(service_regex)
service_regex
Regular expression used to select a process in ps output
States
stopped [windows, unix]
Ensure that a service is stopped using the appropriate method
service(service_regex).stopped()
status [windows]
This generic method defines if service should run or be stopped
service(service_regex).status(status)
status (Stopped
, Running
)
Desired state for the service, can be 'Stopped' or 'Running'
started_path [unix]
Ensure that a service is running using the appropriate method, specifying the path of the service in the ps output, or using Windows task manager
service(service_regex).started_path(service_path)
service_path
Service with its path, as in the output from 'ps'
started [windows, unix]
Ensure that a service is running using the appropriate method
service(service_regex).started()
restart [windows, unix]
Restart a service using the appropriate method
service(service_regex).restart()
See service_action for documentation.
reload [unix]
Reload a service using the appropriate method
service(service_regex).reload()
See service_action for documentation.
enabled [windows, unix]
Force a service to be started at boot
service(service_regex).enabled()
disabled [unix]
Force a service not to be enabled at boot
service(service_regex).disabled()
check_started_at_boot [unix]
Check if a service is set to start at boot using the appropriate method
service(service_regex).check_started_at_boot()
check_running_ps [unix]
Check if a service is running using ps
service(service_regex).check_running_ps()
check_running [unix]
Check if a service is running using the appropriate method
service(service_regex).check_running()
check_disabled_at_boot [unix]
Check if a service is set to not start at boot using the appropriate method
service(service_regex).check_disabled_at_boot()
action [unix]
Trigger an action on a service using the appropriate tool
service(service_regex).action(action)
The service_*
methods manage the services running on the system.
Parameters
The name of the service is the name understood by the service manager,
except for the is-active-process
action, where it is the regex to
match against the running processes list.
The action is the name of an action to run on the given service. The following actions can be used:
-
start
-
stop
-
restart
-
reload (or refresh)
-
is-active (or status)
-
is-active-process (in this case, the "service" parameter is the regex to match against the process list)
-
enable
-
disable
-
is-enabled
Other actions may also be used, depending on the selected service manager.
Implementation
These methods will detect the method to use according to the platform.
You can run the methods with an info
verbosity level to see which
service manager will be used for a given action.
Warning
Due to compatibility issues when mixing calls to systemctl and service/init.d, when an init script exists, we will not use the systemctl compatibility layer but directly service/init.d.
The supported service managers are:
-
systemd (any unknown action will be passed directly)
-
upstart
-
smf (for Solaris)
-
service command (for non-boot actions, any unknown action will be passed directly)
-
/etc/init.d scripts (for non-boot actions, any unknown action will be passed directly)
-
SRC (for AIX) (for non-boot actions)
-
chkconfig (for boot actions)
-
update-rc.d (for boot actions)
-
chitab (for boot actions)
-
links in /etc/rcX.d (for boot actions)
-
Windows services
Examples
# To restart the apache2 service
service_action("apache2", "restart");
service_restart("apache2");
action
Action to trigger on the service (start, stop, restart, reload, …)
stop [unix]
Warning
DEPRECATED: This is an action that should not be used in the general case. If you really want to call the stop method, use service_action. Otherwise, simply call service_stopped.
Stop a service using the appropriate method
service(service_regex).stop()
See service_action for documentation.
start [unix]
Warning
DEPRECATED: This is an action that should not be used in the general case. If you really want to call the start method, use service_action. Otherwise, simply call service_started.
Start a service using the appropriate method
service(service_regex).start()
See service_action for documentation.
restart_if [unix]
Warning
DEPRECATED: Use service_restart with a condition.
Restart a service using the appropriate method if the specified class is true; otherwise the restart is considered not required and success classes are returned.
service(service_regex).restart_if(trigger_class)
See service_action for documentation.
trigger_class
class(es) which will trigger the restart of the service, for example "(package_service_installed|service_conf_changed)"
sharedfile
resource sharedfile(target_uuid, file_id)
file_id (must match: ^[A-z0-9._-]+$)
Unique name that will be used to identify the file on the receiver
target_uuid
Which node to share the file with
States
to_node [unix]
This method shares a file with another Rudder node
sharedfile(target_uuid, file_id).to_node(file_path, ttl)
This method shares a file with another Rudder node using a unique file identifier.
Read the Rudder documentation for a high level overview of file sharing between nodes.
The file will be kept on the policy server and transmitted to the destination node’s policy server if it is different. It will be kept on this server for the destination node to download as long as it is not replaced by a new file with the same id or removed by expiration of the TTL.
Parameters
This section describes the generic method parameters.
target_uuid
The node you want to share this file with. The uuid of a node is visible
in the Nodes details (in the Web interface) or by entering
rudder agent info
on the target node.
This is a name that will be used to identify the file in the target node. It should be unique and describe the file content.
The local absolute path of the file to share.
The TTL can be:
-
A simple integer, in this case it is assumed to be a number of seconds
-
A string including units indications, the possible units are:
-
days, day or d
-
hours, hour, or h
-
minutes, minute, or m
-
seconds, second or s
The ttl value can look like 1day 2hours 3minutes 4seconds, can be abbreviated in the form 1d 2h 3m 4s, or written without spaces as 1d2h3m4s, or any combination like 1day2h 3minute 4seconds. Any unit can be skipped, but the decreasing order needs to be respected.
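As a rough illustration of the accepted ttl formats, the sketch below converts such a string to a number of seconds. The regex is adapted (with whitespace allowed between units) from the pattern documented for the ttl parameter; ttl_to_seconds is a hypothetical helper, not part of Rudder.

```python
import re

# Pattern adapted from the documented ttl constraint: optional day, hour,
# minute and second parts, in decreasing order.
TTL_RE = re.compile(
    r"^(?:(\d+)\s*(?:days?|d))?\s*(?:(\d+)\s*(?:hours?|h))?\s*"
    r"(?:(\d+)\s*(?:minutes?|m))?\s*(?:(\d+)\s*(?:seconds?|s))?$"
)

def ttl_to_seconds(ttl):
    """Convert a ttl string ('1d2h3m4s', '356 days', '60', ...) to seconds."""
    ttl = ttl.strip()
    if ttl.isdigit():                      # a bare integer is a number of seconds
        return int(ttl)
    m = TTL_RE.match(ttl)
    if m is None or not any(m.groups()):
        raise ValueError("invalid ttl: %r" % ttl)
    d, h, mn, s = (int(g) if g else 0 for g in m.groups())
    return ((d * 24 + h) * 60 + mn) * 60 + s
```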
This is a name that will be used to identify the file once stored on the server. It should be unique and describe the file content.
Example:
We have a node A, with uuid 2bf1afdc-6725-4d3d-96b8-9128d09d353c, which wants to share the /srv/db/application.properties file with node B, with uuid 73570beb-2d4a-43d2-8ffc-f84a6817849c.
We want this file to stay available for one year for node B on its policy server.
Node B wants to download it into /opt/application/etc/application.properties.
They have to agree (i.e. it has to be defined in the policies of both nodes) on the id of the file that will be used during the exchange; here it will be application.properties.
To share the file, node A will use:
sharedfile_to_node("73570beb-2d4a-43d2-8ffc-f84a6817849c", "application.properties", "/srv/db/application.properties", "356 days")
To download the file, node B will use sharedfile_from_node with:
sharedfile_from_node("2bf1afdc-6725-4d3d-96b8-9128d09d353c", "application.properties", "/opt/application/etc/application.properties")
file_path
Path of the file to share
ttl (must match: ^(\\d+\\s*(days?|d))?(\\d+\\s*(hours?|h))?(\\d+\\s*(minutes?|m))?(\\d+\\s*(seconds?|s))?$)
Time to keep the file on the policy server in seconds or in human readable form (see long description)
from_node [unix]
This method retrieves a file shared from another Rudder node
sharedfile(target_uuid, file_id).from_node(file_path)
This method retrieves a file shared from a Rudder node using a unique file identifier.
The file will be downloaded using native agent protocol and copied into a new file. The destination path must be the complete absolute path of the destination file.
See sharedfile_to_node for a complete example.
file_path
Where to put the file content
simplest
rudderlang simplest for a complete loop
resource simplest()
States
technique
simplest().technique()
sysctl
resource sysctl(key)
key
The key to enforce
States
value [unix]
Enforce a value in sysctl (optionally increase or decrease it)
sysctl(key).value(value, filename, option)
Enforce a value in sysctl
Behaviors
Checks the current value defined for the given key. If it is not set, this method attempts to set it in the file given as argument. If it is set and corresponds to the desired value, the method succeeds. If it is set and does not correspond, the value is set in the given file, the sysctl configuration is reloaded with sysctl --system, and the resulting value is checked. If it is not taken into account by sysctl because it is overridden in another file or is an invalid key, the method returns an error.
Prerequisite
This method requires an /etc/sysctl.d folder, and the sysctl --system
option. It does not support Debian 6 or earlier, CentOS/RHEL 6 or
earlier, SLES 11 or earlier, Ubuntu 12_04 or earlier, AIX and Solaris.
key
The key to enforce/check
value
The expected value for the key
filename
Filename (without extension) containing the key=value when it needs to be set, within /etc/sysctl.d. This method adds the correct extension at the end of the filename.
option (optional parameter)
min: the value given is the minimal value we request; the value is only changed if the current value is lower than it
max: the value given is the maximal value we request; the value is only changed if the current value is higher than it
default (default value): the value is strictly enforced
Comparison is numerical if possible, else alphanumerical. So 10 > 2, but Test10 < Test2.
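This comparison rule can be sketched as follows: compare numerically when both values parse as numbers, otherwise fall back to string comparison (sysctl_cmp is a hypothetical helper, not part of the method's API).

```python
# Sketch of the documented comparison rule: numerical if possible,
# else alphanumerical (plain string comparison).
def sysctl_cmp(a, b):
    """Return -1, 0 or 1, like a classic three-way comparison."""
    try:
        x, y = float(a), float(b)        # numerical comparison when possible
    except ValueError:
        x, y = str(a), str(b)            # alphanumerical fallback
    return (x > y) - (x < y)
```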
Examples
To ensure that swappiness is disabled, and storing the configuration parameter in 99_rudder.conf
sysctl_value("vm.swappiness", "99_rudder", "0", "")
To ensure that the UDP buffer is at least 26214400
sysctl_value("net.core.rmem_max", "99_rudder", "26214400", "min")
filename
File name where to put the value in /etc/sysctl.d (without the .conf extension)
option (can be empty)
Optional modifier on value: Min, Max or Default (default value)
value
The desired value
user
resource user(user)
user
User name
States
uid [unix]
Define the uid of the user. The user must already exist, and the uid must be unique (not already allocated).
user(user).uid(uid)
This method does not create the user.
uid
User’s uid
status [windows]
This generic method defines if user is present or absent
user(user).status(status)
status (Present, Absent)
Desired state for the user - can be 'Present' or 'Absent'
shell [unix]
Define the shell of the user. User must already exist.
user(user).shell(shell)
This method does not create the user. entry example: /bin/false
shell
User’s shell
primary_group [unix]
Define the primary group of the user. User must already exist.
user(user).primary_group(primary_group)
This method does not create the user.
primary_group
User’s primary group
present [windows, unix]
Ensure a user exists on the system.
user(user).present()
This method does not create the user’s home directory. The primary group will be created and set to the default one, following the useradd default behavior. As is the default behavior on most UNIX systems, user creation will fail if a group with the user name already exists.
password_hash [unix]
Ensure a user’s password. The password must respect the $id$salt$hashed format as used in the UNIX /etc/shadow file.
user(user).password_hash(password)
The user must exist, and the password must be pre-hashed. Does not handle empty password accounts. See the UNIX /etc/shadow format. Entry example: $1$jp5rCMS4$mhvf4utonDubW5M00z0Ow0
An empty password will lead to an error and be notified.
password
User hashed password
password_clear [windows]
Ensure a user’s password, provided in clear text.
user(user).password_clear(password)
The user must exist; the password will appear in clear text in the code. An empty password will lead to an error and be notified.
password
User clear password
locked [unix]
Ensure the user is locked. User must already exist.
user(user).locked()
This method does not create the user. Note that locked accounts will be marked with "!" in /etc/shadow, which is equivalent to "*". To unlock a user, apply a user_password method.
home [unix]
Define the home of the user. The user must already exist.
user(user).home(home)
This method does not create the user, nor the home directory. Entry example: /home/myuser. The given home will be set, but not created.
home
User’s home
group [unix]
Define secondary group for a user
user(user).group(group_name)
Ensure that a user is within a group
Behavior
Ensure that the user belongs in the given secondary group (non-exclusive)
user
The user login
group_name
Secondary group name the user should belong to (non-exclusive)
Examples
To ensure that user test belongs in group dev:
user_group("test", "dev")
Note that it will make sure that user test is in group dev, but won’t remove it from other groups it may belong to
group_name
Secondary group name for the user
fullname [unix]
Define the fullname of the user. The user must already exist.
user(user).fullname(fullname)
This method does not create the user.
fullname
User’s fullname
absent [windows, unix]
Remove a user
user(user).absent()
This method ensures that a user does not exist on the system.
create [unix]
Warning
|
DEPRECATED: Please split into calls to other user_* methods: user_present user_fullname user_home user_primary_group user_shell and user_locked |
Create a user
user(user).create(description, home, group, shell, locked)
This method does not create the user’s home directory.
description
User description
group
User’s primary group
home
User’s home directory
locked
Is the user locked? true or false
shell
User’s shell
variable
resource variable(prefix, name)
name
The variable to define, the full name will be prefix.name
prefix
The prefix of the variable name
States
string_match [unix]
Test the content of a string variable
variable(prefix, name).string_match()
Test a variable content and report a success if it matches, or an error if it does not or if the variable could not be found. The regex must respect the PCRE format. Please note that this method is designed to only audit a variable state. If you want to use conditions resulting from this generic method, it is recommended to use condition_from_variable_match instead, which is designed for that.
string_from_math_expression [unix]
Define a variable from a mathematical expression
variable(prefix, name).string_from_math_expression(expression, format)
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
Usage
This function will evaluate a mathematical expression that may contain variables and format the result according to the provided format string.
The formatting string uses the standard POSIX printf format.
Supported mathematical expressions
All the mathematical computations are done using floats.
The supported infix mathematical syntax, in order of precedence, is:
-
( and ) parentheses for grouping expressions
-
^ operator for exponentiation
-
* and / operators for multiplication and division
-
% operator for modulo operation
-
+ and - operators for addition and subtraction
-
== "close enough" operator to tell if two expressions evaluate to the same number, with a tiny margin to tolerate floating point errors. It returns 1 or 0.
-
>= "greater or close enough" operator with a tiny margin to tolerate floating point errors. It returns 1 or 0.
-
> "greater than" operator. It returns 1 or 0.
-
<= "less than or close enough" operator with a tiny margin to tolerate floating point errors. It returns 1 or 0.
-
< "less than" operator. It returns 1 or 0.
The numbers can be in any format acceptable to the C scanf function with the %lf format specifier, followed by the k, m, g, t, or p SI units. So e.g. -100 and 2.34m are valid numbers.
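A minimal sketch of this number syntax, under the assumption that the k/m/g/t/p suffixes multiply by the usual SI powers of 1000 (parse_number is an illustrative helper, not part of the library, and it accepts only a common subset of the scanf %lf forms):

```python
import re

# Multipliers assumed for the documented SI suffixes.
_SI = {"k": 1e3, "m": 1e6, "g": 1e9, "t": 1e12, "p": 1e15}

def parse_number(s):
    """Parse a number like '-100', '2k' or '2.34m' into a float."""
    m = re.fullmatch(r"([+-]?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?)([kmgtp]?)", s.strip())
    if m is None:
        raise ValueError("not a number: %r" % s)
    return float(m.group(1)) * _SI.get(m.group(2), 1.0)
```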
In addition, the following constants are recognized:
-
e: 2.7182818284590452354
-
log2e: 1.4426950408889634074
-
log10e: 0.43429448190325182765
-
ln2: 0.69314718055994530942
-
ln10: 2.30258509299404568402
-
pi: 3.14159265358979323846
-
pi_2: 1.57079632679489661923 (pi over 2)
-
pi_4: 0.78539816339744830962 (pi over 4)
-
1_pi: 0.31830988618379067154 (1 over pi)
-
2_pi: 0.63661977236758134308 (2 over pi)
-
2_sqrtpi: 1.12837916709551257390 (2 over square root of pi)
-
sqrt2: 1.41421356237309504880 (square root of 2)
-
sqrt1_2: 0.70710678118654752440 (square root of 1/2)
The following functions can be used, with parentheses:
-
ceil and floor: the next highest or the previous highest integer
-
log10, log2, log
-
sqrt
-
sin, cos, tan, asin, acos, atan
-
abs: absolute value
-
step: 0 if the argument is negative, 1 otherwise
Formatting options
The format field supports the following specifiers:
-
%d for decimal integer
-
%x for hexadecimal integer
-
%o for octal integer
-
%f for decimal floating point
You can use usual flags, width and precision syntax.
Examples
If you use:
variable_string("prefix", "var", "10");
variable_string_from_math_expression("prefix", "sum", "2.0+3.0", "%d");
variable_string_from_math_expression("prefix", "product", "3*${prefix.var}", "%d");
The prefix.sum string variable will contain 5 and prefix.product will contain 30.
expression
The mathematical expression to evaluate
format
The format string to use
string_from_file [windows, unix]
Define a variable from a file content
variable(prefix, name).string_from_file(file_name)
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
file_name
The path of the file
string_from_command [windows, unix]
Define a variable from a command output
variable(prefix, name).string_from_command(command)
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
command
The command to execute
string_from_augeas []
Use Augeas binaries to call Augtool commands and options to get a node label’s value.
variable(prefix, name).string_from_augeas(path, lens, file)
Augeas is a tool that provides an abstraction layer over the complexities of editing configuration files with regular expressions. It represents system configuration files as a tree hierarchy, which you can safely modify. To do so, you provide the path to a node label’s value. Augeas uses lenses, modules that are in charge of identifying files and converting them to and from this tree representation. This way, you first manipulate the tree, then save your changes back to the configuration files on the actual system.
This method uses augtool commands and options to get the value of a given node label, by specifying the path to it. The method takes 5 parameters in total: variable prefix, variable name, path, lens and file.
There are two ways to use this method: either you simply provide the path to the node label as a parameter, or you specify a file associated with a lens and then the regular path. When you only specify the path to the node label, Augeas will by default load all of its lenses and files. On the other hand, if you have a specific file, for example a JSON file that you want to associate with the existing Json lens, you need to also fill in the file and lens parameters; this way Augeas will load only the file and lens you have specified. The generic method will get the node label’s value from the agent; if Augeas is not installed on the agent, it will produce an error.
With autoload
Let’s consider that you want to obtain the value of the IP address of the first line in the /etc/hosts file by indicating the path to it. (Note that the label and value parameters mentioned are naming examples of variable prefix and variable name; the path /etc/hosts/1/ipaddr represents the value of the ipaddr node label in the first line of /etc/hosts.)
variable_string_from_augeas("label","value","/etc/hosts/1/ipaddr", "", "");
Without autoload
This second form covers two needs: either you want to prevent Augeas from loading all lenses and files while executing your request, or you want to associate the Hosts lens with the /etc/hosts file and then get, for example, the same node value as in the first use case.
variable_string_from_augeas("label","value","/etc/hosts/1/ipaddr","Hosts","/etc/hosts");
file (can be empty)
The file to load, if you want to load a specific file associated with its lens
lens (can be empty)
The lens to load, if you want to load a specific lens associated with its file
path
The path to the file and node label
string_default [unix]
Define a variable from another variable name, with a default value if undefined
variable(prefix, name).string_default(source_variable, default_value)
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
default_value
The default value to use if source_variable is not defined
source_variable
The source variable name
string [windows, unix]
Define a variable from a string parameter
variable(prefix, name).string(value)
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
value
The variable content
iterator_from_file [unix]
Define a variable that will be automatically iterated over
variable(prefix, name).iterator_from_file(file_name, separator_regex, comments_regex)
The generated variable is a special variable that is automatically iterated over. When you call a generic method with this variable as a parameter, n calls will be made, one for each item of the variable. Note: there is a limit of 10000 items. Note: empty items are ignored.
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
comments_regex
Regular expression that is used to remove comments ( usually: \s*#.*?(?=\n) )
file_name
The path to the file
separator_regex
Regular expression that is used to split the value into items ( usually: \n )
iterator [unix]
Define a variable that will be automatically iterated over
variable(prefix, name).iterator(value, separator)
The generated variable is a special variable that is automatically iterated over. When you call a generic method with this variable as a parameter, n calls will be made, one for each item of the variable. Note: there is a limit of 10000 items.
To use the generated variable, you must use the form
${variable_prefix.variable_name}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
separator
Regular expression that is used to split the value into items ( usually: , )
value
The variable content
dict_merge_tolerant [unix]
Define a variable resulting of the merge of two other variables, allowing merging undefined variables
variable(prefix, name).dict_merge_tolerant(first_variable, second_variable)
To use the generated variable, you must use the form
${variable_prefix.variable_name[key]}
with each name replaced with the
parameters of this method.
See variable_dict_merge for usage documentation. The only difference is that this method will not fail if one of the variables does not exist, and will return the other one. If both are undefined, the method will still fail.
first_variable
The first variable, whose content will be overridden in the resulting variable if necessary (written in the form variable_prefix.variable_name)
second_variable
The second variable, whose content will override the first in the resulting variable if necessary (written in the form variable_prefix.variable_name)
dict_merge [windows, unix]
Define a variable resulting of the merge of two other variables
variable(prefix, name).dict_merge(first_variable, second_variable)
To use the generated variable, you must use the form
${variable_prefix.variable_name[key]}
with each name replaced with the
parameters of this method.
The resulting variable will be the merge of the two parameters, which means it is built by:
-
Taking the content of the first variable
-
Adding the content of the second variable, and replacing the keys that were already there
It is only a one-level merge, and the value of the first-level key will be completely replaced by the merge.
This method will fail if one of the variables is not defined. See variable_dict_merge_tolerant if you want to allow one of the variables not to be defined.
Usage
If you have a prefix.variable1
variable defined by:
{ "key1": "value1", "key2": "value2", "key3": { "keyx": "valuex" } }
And a prefix.variable2
variable defined by:
{ "key1": "different", "key3": "value3", "key4": "value4" }
And that you use:
variable_dict_merge("prefix", "variable3", "prefix.variable1", "prefix.variable2")
You will get a prefix.variable3
variable containing:
{
"key1": "different",
"key2": "value2",
"key3": "value3",
"key4": "value4"
}
first_variable
The first variable, whose content will be overridden in the resulting variable if necessary (written in the form variable_prefix.variable_name)
second_variable
The second variable, whose content will override the first in the resulting variable if necessary (written in the form variable_prefix.variable_name)
dict_from_osquery [unix]
Define a variable that contains key,value pairs (a dictionary) from an osquery query
variable(prefix, name).dict_from_osquery(query)
To use the generated variable, you must use the form
${variable_prefix.variable_name[key]}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
This method will define a dict variable from the output of an osquery query. The query will be executed at every agent run, and its result will be usable as a standard dict variable.
Setup
This method requires the presence of osquery on the target nodes. It won’t install it automatically. Check the correct way of doing so for your OS.
Building queries
To learn about the possible queries, read the osquery schema for your osquery version.
You can test the queries before using them with the osqueryi
command,
see the example below.
Examples
# To get the number of cpus on the machine
variable_dict_from_osquery("prefix", "var1", "select cpu_logical_cores from system_info;");
It will produce the dict from the output of:
osqueryi --json "select cpu_logical_cores from system_info;"
Hence something like:
[
{"cpu_logical_cores":"8"}
]
To access this value, use the ${prefix.var1[0][cpu_logical_cores]}
syntax.
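The structure of that output can be illustrated by parsing the JSON shown above directly; this is only a sketch of what the agent does internally when it builds the dict variable.

```python
import json

# What `osqueryi --json "select cpu_logical_cores from system_info;"`
# prints on a machine with 8 logical cores (example output from the docs).
raw = '[{"cpu_logical_cores":"8"}]'

rows = json.loads(raw)                 # a JSON array of row objects
cores = rows[0]["cpu_logical_cores"]   # mirrors ${prefix.var1[0][cpu_logical_cores]}
```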
query
The query to execute (ending with a semicolon)
dict_from_file_type [unix]
Define a variable that contains key,value pairs (a dictionary) from a JSON, CSV or YAML file
variable(prefix, name).dict_from_file_type(file_name, file_type)
To use the generated variable, you must use the form
${variable_prefix.variable_name[key]}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive).
This method will load data from various file formats (yaml, json, csv).
CSV parsing
The input file must use CRLF as line delimiter to be readable (as stated in RFC 4180).
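The CRLF requirement can be illustrated with any RFC 4180 style parser; below is a minimal sketch using Python's csv module (the file content and column names are invented for illustration).

```python
import csv
import io

# An RFC 4180 style CSV payload: header row, CRLF line delimiters.
data = "key,value\r\nport,22\r\nproto,tcp\r\n"

# DictReader maps each data row onto the header, giving key,value pairs
# comparable to the dict variable this method defines.
rows = list(csv.DictReader(io.StringIO(data)))
```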
Examples
# To read a json file with format auto detection
variable_dict_from_file_type("prefix", "var", "/tmp/file.json", "");
# To force yaml reading on a file without a yaml extension
variable_dict_from_file_type("prefix", "var", "/tmp/file", "YAML");
If /tmp/file.json
contains:
{
"key1": "value1"
}
You will be able to access the value1
value with
${prefix.var[key1]}
.
file_name
The file name to load data from
file_type (empty, auto, JSON, YAML, CSV)
The file type, can be "JSON", "CSV", "YAML" or "auto" for auto detection based on file extension, with a fallback to JSON (default is "auto")
dict_from_file [windows, unix]
Define a variable that contains key,value pairs (a dictionary) from a JSON file
variable(prefix, name).dict_from_file(file_name)
To use the generated variable, you must use the form
${variable_prefix.variable_name[key]}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
See variable_dict_from_file_type for complete documentation.
file_name
The absolute local file name with JSON content
dict [windows, unix]
Define a variable that contains key,value pairs (a dictionary)
variable(prefix, name).dict(value)
To use the generated variable, you must use the form
${variable_prefix.variable_name[key]}
with each name replaced with the
parameters of this method.
Be careful that using a global variable can lead to unpredictable content in case of multiple definition, which is implicitly the case when a technique has more than one instance (directive). Please note that only global variables are available within templates.
value
The variable content in JSON format
windows
resource windows(hotfix)
hotfix
Windows hotfix name (ex: KB4033369)
States
hotfix_present [windows]
Ensure that a specific windows hotfix is present on the system.
windows(hotfix).hotfix_present(package_path)
Ensure that a specific windows hotfix is present on the system.
package_path
Windows hotfix package absolute path, can be a .msu archive or a .cab file
hotfix_absent [windows]
Ensure that a specific windows hotfix is absent from the system.
windows(hotfix).hotfix_absent()
Ensure that a specific windows hotfix is absent from the system.
component_present [windows]
Ensure that a specific windows component is present on the system.
windows(hotfix).component_present()
Ensure that a specific windows component is present on the system.
component_absent [windows]
Ensure that a specific windows component is absent from the system.
windows(hotfix).component_absent()
Ensure that a specific windows component is absent from the system.