Command Line Overview


Using the command-line interface.

© 2010-2011 Google
Use under GPLv2+ or CC-by-SA
Some images borrowed/modified from Lance Albertson and Iustin Pop

Ganeti Commands...

General command format

gnt-[subsystem] [verb] --flags --flags [noun]
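
A concrete example of this pattern (the flags come from the "list" help
shown below; the instance name is purely illustrative):

gnt-instance list --no-headers --separator=: web1.example.com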

Three ways to get help

Help

# gnt-instance list --help
Usage
=====
  gnt-instance list [<instance>...]

Lists the instances and their status. The available
fields can be shown using the "list-fields" command
(see the man page for details). The default field list
is (in order): name, hypervisor, os, pnode, status,
oper_ram.

Options
=======
--no-headers            Don't display column headers
--separator=SEPARATOR   Separator between output fields ...
...

Commands run on the master

Commands must be run on the master node; running them on any other node gives you a friendly message pointing at the master. Scripts can use "gnt-cluster getmaster" to find the right place.

# gnt-node list
Failure: prerequisites not met for this operation:
This is not the master node, please connect to node
'gnta2.example.com' and rerun the command
# gnt-cluster getmaster
gnta2.example.com
# ssh gnta2.example.com

WARNING:
This machine is part of a ganeti cluster.

# gnt-node list
Node              DTotal DFree MTotal MNode MFree Pinst Sinst
gnta1.example.com   3.6T  3.1T  64.0G 1023M 15.0G     5     3
...etc...

gnt-cluster

Cluster-wide operations:

gnt-cluster info
gnt-cluster modify [-B/H/N ...]
gnt-cluster verify
gnt-cluster master-failover
gnt-cluster command ...
gnt-cluster copyfile ...

gnt-cluster example

# gnt-cluster verify
Submitted jobs 285450, 285451
Waiting for job 285450 ...
Sat Oct 27 19:14:08 2012 * Verifying cluster config
Sat Oct 27 19:14:08 2012 * Verifying cluster certificate files
Sat Oct 27 19:14:08 2012 * Verifying hypervisor parameters
Sat Oct 27 19:14:08 2012 * Verifying all nodes belong to an existing group
Waiting for job 285451 ...
Sat Oct 27 19:14:08 2012 * Verifying group 'default'
Sat Oct 27 19:14:08 2012 * Gathering data (3 nodes)
Sat Oct 27 19:14:10 2012 * Gathering disk information (3 nodes)
Sat Oct 27 19:14:11 2012 * Verifying configuration file consistency
Sat Oct 27 19:14:11 2012 * Verifying node status
Sat Oct 27 19:14:11 2012 * Verifying instance status
Sat Oct 27 19:14:11 2012 * Verifying orphan volumes
Sat Oct 27 19:14:11 2012 * Verifying N+1 Memory redundancy
Sat Oct 27 19:14:11 2012 * Other Notes
Sat Oct 27 19:14:11 2012   - NOTICE: 1 offline node(s) found.
Sat Oct 27 19:14:12 2012 * Hooks Results

gnt-node

Per-node operations:

gnt-node remove node4
gnt-node modify \
  [ --master-candidate yes|no ] \
  [ --drained yes|no ] \
  [ --offline yes|no ] node2
gnt-node evacuate/failover/migrate
gnt-node powercycle

gnt-node examples

# gnt-node list
Node              DTotal DFree MTotal MNode MFree Pinst Sinst
gnta1.example.com   3.6T  3.1T  64.0G 1023M 15.0G     5     3
gnta2.example.com   3.6T  3.1T  64.0G 1023M 22.9G     4     4
gnta3.example.com      *     *      *     *     *     0     0
gnta4.example.com   3.6T  3.1T  64.0G 1023M 21.0G     4     6

# gnt-node info gnta1
Node name: gnta1.example.com
  primary ip: 172.15.155.15
  secondary ip: 172.99.199.1
      ...etc...
  primary for instances:
    - ginny.example.com
      ...etc...
  secondary for instances:
    - webcsi.example.com
      ...etc...
  node parameters:
    - oob_program: default (None)
    - spindle_count: default (1)
      ...etc...

gnt-instance

Instance operations:

gnt-instance start/stop i0
gnt-instance modify ... i0
gnt-instance info i0
gnt-instance migrate i0
gnt-instance console i0

gnt-instance examples

# gnt-instance list
Instance             Hypervisor OS            Primary_node      Status  Memory
rocker1.example.com  xen-pvm    debian-server gnta2.example.com running   512M
webcsi.example.com   xen-pvm    debian-server gnta3.example.com running   1.0G

# gnt-instance info rocker1
Instance name: rocker1.example.com
UUID: 3244567d-a08a-4663-8349-c68307fab664
Serial number: 2
Creation time: 2012-07-05 20:08:14
Modification time: 2012-07-09 15:33:03
State: configured to be up, actual state is up
  Nodes:
    - primary: gnta2.example.com
    - secondaries: gnta3.example.com
  Operating system: debian-server
  Allocated network port: None
  Hypervisor: xen-pvm
    - blockdev_prefix: default (sd)
    - bootloader_args: default ()
    - bootloader_path: default ()
    - cpu_mask: default (all)
...etc...

Job Queue

gnt-job list
gnt-job info
gnt-job watch
gnt-job cancel
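
For example, the two jobs submitted by the "gnt-cluster verify" run
shown earlier could be inspected while they execute:

# gnt-job info 285450
# gnt-job watch 285451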

hbal

A utility for rebalancing instances across the nodes of a cluster.
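
A minimal invocation (a sketch: -L talks to the locally running master
daemon, as with hspace below, and -X additionally executes the proposed
moves):

hbal -L
hbal -L -X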

gnt-backup

Manage instance exports/backups:

gnt-backup export -n node1 web
gnt-backup import -t plain \
  {-n node3 | -I hail } --src-node node1 \
  --src-dir /tmp/myexport web
gnt-backup list
gnt-backup remove

gnt-group

Managing node groups:

gnt-group add
gnt-group assign-nodes
gnt-group evacuate
gnt-group list
gnt-group modify
gnt-group remove
gnt-group rename
gnt-instance change-group
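
gnt-group example

A sketch with hypothetical names (the group "rack2" is not part of the
cluster shown above):

# gnt-group add rack2
# gnt-group assign-nodes rack2 gnta3.example.com
# gnt-group list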

hspace

Calculates the spare capacity of a cluster, given the instance policy.

hspace -L --disk-template ...

How many more instances can I fit, and which resource will run out first?
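
For example, to estimate remaining capacity for DRBD-based instances on
the running cluster (the disk template here is just an illustration):

hspace -L --disk-template drbd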

Custom output

What are the -o fields?
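
The available fields can be listed with the "list-fields" command
mentioned in the help text above, and then selected with -o (the field
names below are the defaults of "gnt-instance list"):

# gnt-instance list-fields
# gnt-instance list -o name,status,oper_ram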

Filtering a list
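
In newer Ganeti releases the list commands also accept a query filter;
a sketch (the exact syntax may differ between versions):

# gnt-instance list --filter 'status == "running"'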

More on filtering

Conclusion

Questions?

© 2010-2011 Google
Use under GPLv2+ or CC-by-SA
Some images borrowed/modified from Lance Albertson and Iustin Pop