Drivers

Cloud management

universal [CloudManagement]

Universal cloud management driver

This driver is suitable for the most abstract (and thus universal) case. It has no built-in services and no node discovery capabilities: all services must be listed explicitly in the config file (see the sketch after the example below), and the node list must be supplied by the node_list node discovery driver.

Example of a multi-node configuration:

Note that this configuration requires a node discovery driver.

cloud_management:
  driver: universal

node_discover:
  driver: node_list
  args:
    - ip: 192.168.5.149
      auth:
        username: developer
        private_key_file: cloud_key
        become_password: my_secret_password
    - ip: 192.168.5.150
      auth:
        username: developer
        private_key_file: cloud_key
        become_password: my_secret_password
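
Since the universal driver has no built-in services, every service to be controlled must be declared explicitly in the same file. A minimal sketch, assuming a hypothetical application my_app managed by the process service driver (described under Service drivers below):

services:
  app:
    driver: process
    args:
      grep: my_app
      restart_cmd: /bin/my_app --restart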

devstack [CloudManagement, NodeDiscover]

Driver for DevStack.

This driver requires DevStack installed with Systemd (USE_SCREEN=False). It supports discovery of node MAC addresses.

Example configuration:

cloud_management:
  driver: devstack
  args:
    address: 192.168.1.10
    auth:
      username: ubuntu
      password: ubuntu_pass
      private_key_file: ~/.ssh/id_rsa_devstack
    iface: eth1

parameters:

  • address - IP address of any DevStack node
  • username - username for all nodes
  • password - password for all nodes (optional)
  • private_key_file - path to the key file (optional)
  • iface - network interface name used to retrieve the MAC address (optional)

Default services:

  • cinder-api
  • cinder-scheduler
  • cinder-volume
  • etcd
  • glance-api
  • heat-api
  • heat-engine
  • keystone
  • memcached
  • mysql
  • neutron-dhcp-agent
  • neutron-l3-agent
  • neutron-meta-agent
  • neutron-openvswitch-agent
  • neutron-server
  • nova-api
  • nova-compute
  • nova-scheduler
  • placement-api
  • rabbitmq

saltcloud [CloudManagement, NodeDiscover]

Driver for an OpenStack cloud managed by Salt.

Supports discovery of slave nodes.

Example configuration:

cloud_management:
  driver: saltcloud
  args:
    address: 192.168.1.10
    auth:
      username: root
      password: root_pass
      private_key_file: ~/.ssh/id_rsa_tcpcloud
    slave_auth:
      username: ubuntu
      password: ubuntu_pass
      become_username: root
    slave_name_regexp: ^(?!cfg|mon)
    slave_direct_ssh: True
    get_ips_cmd: pillar.get _param:single_address

parameters:

  • address - IP address of the Salt config node
  • username - username for the Salt config node
  • password - password for the Salt config node (optional)
  • private_key_file - path to the key file (optional)
  • slave_username - username for Salt minions (optional); username is used if slave_username is not specified
  • slave_password - password for Salt minions (optional); password is used if slave_password is not specified
  • master_sudo - use sudo on the Salt config node (optional)
  • slave_sudo - use sudo on Salt minion nodes (optional)
  • slave_name_regexp - regexp for minion FQDNs (optional)
  • slave_direct_ssh - if False, the Salt master is used as an SSH proxy (optional)
  • get_ips_cmd - Salt command to get the IPs of minions (optional)
  • serial - how many hosts Ansible should manage at a single time (optional, default: 10)

Default services:

  • cinder-api
  • cinder-backup
  • cinder-scheduler
  • cinder-volume
  • elasticsearch
  • glance-api
  • glance-glare
  • glance-registry
  • grafana-server
  • heat-api
  • heat-engine
  • horizon
  • influxdb
  • keystone
  • kibana
  • memcached
  • mysql
  • nagios3
  • neutron-dhcp-agent
  • neutron-l3-agent
  • neutron-metadata-agent
  • neutron-openvswitch-agent
  • neutron-server
  • nova-api
  • nova-cert
  • nova-compute
  • nova-conductor
  • nova-consoleauth
  • nova-novncproxy
  • nova-scheduler
  • rabbitmq

Power management

libvirt [PowerDriver]

Libvirt driver.

Example configuration:

power_managements:
- driver: libvirt
  args:
    connection_uri: qemu+unix:///system

parameters:

  • connection_uri - Libvirt connection URI

Note that the Libvirt domain name should be specified as a node attribute (libvirt_name). Refer to the node_list node discovery driver for details; a pairing sketch follows.
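
For illustration, a sketch pairing the libvirt power driver with a node_list entry that carries the domain name (all values hypothetical):

power_managements:
- driver: libvirt
  args:
    connection_uri: qemu+unix:///system

node_discover:
  driver: node_list
  args:
  - ip: 10.0.0.51
    libvirt_name: node1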

ipmi [PowerDriver]

IPMI driver.

Example configuration:

power_managements:
- driver: ipmi
  args:
    mac_to_bmc:
      aa:bb:cc:dd:ee:01:
        address: 170.0.10.50
        username: admin1
        password: Admin_123
      aa:bb:cc:dd:ee:02:
        address: 170.0.10.51
        username: admin2
        password: Admin_123
    fqdn_to_bmc:
      node3.local:
        address: 170.0.10.52
        username: admin1
        password: Admin_123

parameters:

  • mac_to_bmc - mapping where keys are node MAC addresses and values are the corresponding BMC configurations with the following fields:
    • address - IP address of the IPMI server
    • username - IPMI user
    • password - IPMI password
  • fqdn_to_bmc - like mac_to_bmc, but keyed by node FQDN (see the example above)

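The BMC mapping keys must match node attributes known to the node discovery driver. For illustration, a sketch pairing ipmi with a node_list entry carrying the corresponding MAC address (all values hypothetical):

power_managements:
- driver: ipmi
  args:
    mac_to_bmc:
      aa:bb:cc:dd:ee:01:
        address: 170.0.10.50
        username: admin1
        password: Admin_123

node_discover:
  driver: node_list
  args:
  - ip: 10.0.0.51
    mac: aa:bb:cc:dd:ee:01
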
Node discover

node_list [NodeDiscover]

Node list.

Allows specifying a list of nodes in the configuration.

Example configuration:

node_discover:
  driver: node_list
  args:
  - ip: 10.0.0.51
    mac: aa:bb:cc:dd:ee:01
    fqdn: node1.local
    libvirt_name: node1
  - ip: 192.168.1.50
    mac: aa:bb:cc:dd:ee:02
    fqdn: node2.local
    auth:
      username: user1
      password: secret1
      jump:
        host: 10.0.0.52
        username: ubuntu
        private_key_file: /path/to/file
  - ip: 10.0.0.53
    mac: aa:bb:cc:dd:ee:03
    fqdn: node3.local
    auth:
      become_password: my_secret_password

node parameters:

  • ip - IP address or hostname of the node
  • mac - MAC address of the node (optional). The MAC address is used by the libvirt driver and for mac_to_bmc matching in the ipmi driver.
  • fqdn - FQDN of the node (optional). The FQDN is used for filtering (and by the ipmi driver's fqdn_to_bmc mapping).
  • libvirt_name - Libvirt domain name (optional).
  • auth - SSH related parameters (optional):
    • username - SSH username (optional)
    • password - SSH password (optional)
    • private_key_file - SSH key file (optional)
    • become_password - privilege escalation password (optional)
    • jump - SSH proxy parameters (optional):
      • host - SSH proxy host
      • username - SSH proxy user
      • private_key_file - SSH proxy key file (optional)

Service drivers

process [Service]

Service as process

“process” is a basic service driver that relies on ps and kill for actions such as kill / freeze / unfreeze. Commands for start / restart / terminate must be specified in the configuration; otherwise those actions will fail at runtime.

Example configuration:

services:
  app:
    driver: process
    args:
      grep: my_app
      restart_cmd: /bin/my_app --restart
      terminate_cmd: /bin/stop_my_app
      start_cmd: /bin/my_app
      port: ['tcp', 4242, 'ingress']

parameters:

  • grep - regexp passed to grep to find the process PID
  • restart_cmd - command to restart the service (optional)
  • terminate_cmd - command to terminate the service (optional)
  • start_cmd - command to start the service (optional)
  • port - tuple of two or three values: protocol, port number, and direction (optional); both forms are sketched below

Note that network operations are based on iptables and are applied to the whole host, not restricted to a single process.
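
For illustration, the direction element may be dropped, leaving just protocol and port number. A sketch of both forms, with hypothetical service names and values:

services:
  app_in:
    driver: process
    args:
      grep: my_app
      port: ['tcp', 4242, 'ingress']
  app_any:
    driver: process
    args:
      grep: my_app
      port: ['udp', 5353]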

system_service [ServiceAsProcess]

System service

This is a universal driver for any system service supported by Ansible (e.g. systemd, upstart). Please refer to the Ansible documentation at http://docs.ansible.com/ansible/latest/service_module.html for the full list.

Example configuration:

services:
  app:
    driver: system_service
    args:
      service_name: app
      grep: my_app
      port: ['tcp', 4242, 'ingress']

parameters:

  • service_name - name of the service
  • grep - regexp passed to grep to find the process PID
  • port - tuple of two or three values: protocol, port number, and direction (optional)

salt_service [ServiceAsProcess]

Salt service

Service that can be controlled by Salt service.* commands.

Example configuration:

services:
  app:
    driver: salt_service
    args:
      salt_service: app
      grep: my_app
      port: ['tcp', 4242, 'egress']

parameters:

  • salt_service - name of the service
  • grep - regexp passed to grep to find the process PID
  • port - tuple of two or three values: protocol, port number, and direction (optional)

Container drivers

docker_container [Container]

Docker container

This is a Docker container driver for any containers supported by Ansible. Please refer to the Ansible documentation at https://docs.ansible.com/ansible/latest/modules/docker_container_module.html for the full list.

Example configuration:

containers:
  app:
    driver: docker_container
    args:
      container_name: app

parameters:

  • container_name - name of the container
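
Putting it together, these sections combine into a single configuration file. A minimal sketch wiring the universal cloud management driver to a node list, one service, and one container (all values hypothetical):

cloud_management:
  driver: universal

node_discover:
  driver: node_list
  args:
  - ip: 10.0.0.51
    auth:
      username: developer
      private_key_file: cloud_key

services:
  app:
    driver: process
    args:
      grep: my_app

containers:
  app:
    driver: docker_container
    args:
      container_name: app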