
Apache CloudStack 4.1.0

CloudStack Complete Documentation

Edition 1

Apache CloudStack


Legal Notice

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Apache CloudStack is an effort undergoing incubation at The Apache Software Foundation (ASF).
Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
Abstract
Complete documentation for CloudStack.

1. Concepts
1.1. What Is CloudStack?
1.2. What Can CloudStack Do?
1.3. Deployment Architecture Overview
1.3.1. Management Server Overview
1.3.2. Cloud Infrastructure Overview
1.3.3. Networking Overview
2. Cloud Infrastructure Concepts
2.1. About Regions
2.2. About Zones
2.3. About Pods
2.4. About Clusters
2.5. About Hosts
2.6. About Primary Storage
2.7. About Secondary Storage
2.8. About Physical Networks
2.8.1. Basic Zone Network Traffic Types
2.8.2. Basic Zone Guest IP Addresses
2.8.3. Advanced Zone Network Traffic Types
2.8.4. Advanced Zone Guest IP Addresses
2.8.5. Advanced Zone Public IP Addresses
2.8.6. System Reserved IP Addresses
3. Installation
3.1. Who Should Read This
3.2. Overview of Installation Steps
3.3. Minimum System Requirements
3.3.1. Management Server, Database, and Storage System Requirements
3.3.2. Host/Hypervisor System Requirements
3.4. Configure package repository
3.4.1. DEB package repository
3.4.2. RPM package repository
3.5. Management Server Installation
3.5.1. Management Server Installation Overview
3.5.2. Prepare the Operating System
3.5.3. Install the Management Server on the First Host
3.5.4. Install the database server
3.5.5. About Password and Key Encryption
3.5.6. Prepare NFS Shares
3.5.7. Prepare and Start Additional Management Servers
3.5.8. Prepare the System VM Template
3.5.9. Installation Complete! Next Steps
3.6. Building RPMs from Source
3.6.1. Generating RPMS
3.7. Building DEB packages
3.7.1. Setting up an APT repo
3.7.2. Configuring your machines to use the APT repository
3.8. Prerequisites for building Apache CloudStack
4. User Interface
4.1. Log In to the UI
4.1.1. End User's UI Overview
4.1.2. Root Administrator's UI Overview
4.1.3. Logging In as the Root Administrator
4.1.4. Changing the Root Password
4.2. Using SSH Keys for Authentication
4.2.1. Creating an Instance Template that Supports SSH Keys
4.2.2. Creating the SSH Keypair
4.2.3. Creating an Instance
4.2.4. Logging In Using the SSH Keypair
4.2.5. Resetting SSH Keys
5. Steps to Provisioning Your Cloud Infrastructure
5.1. Overview of Provisioning Steps
5.2. Adding Regions (optional)
5.2.1. The First Region: The Default Region
5.2.2. Adding a Region
5.2.3. Adding Third and Subsequent Regions
5.2.4. Deleting a Region
5.3. Adding a Zone
5.3.1. Basic Zone Configuration
5.3.2. Advanced Zone Configuration
5.4. Adding a Pod
5.5. Adding a Cluster
5.5.1. Add Cluster: KVM or XenServer
5.5.2. Add Cluster: vSphere
5.6. Adding a Host
5.6.1. Adding a Host (XenServer or KVM)
5.6.2. Adding a Host (vSphere)
5.7. Add Primary Storage
5.7.1. System Requirements for Primary Storage
5.7.2. Adding Primary Storage
5.8. Add Secondary Storage
5.8.1. System Requirements for Secondary Storage
5.8.2. Adding Secondary Storage
5.9. Initialize and Test
6. Global Configuration Parameters
6.1. Setting Global Configuration Parameters
6.2. About Global Configuration Parameters
7. Hypervisor Installation
7.1. KVM Hypervisor Host Installation
7.1.1. System Requirements for KVM Hypervisor Hosts
7.1.2. KVM Installation Overview
7.1.3. Prepare the Operating System
7.1.4. Install and configure the Agent
7.1.5. Install and Configure libvirt
7.1.6. Configure the Security Policies
7.1.7. Configure the network bridges
7.1.8. Configure the network using Open vSwitch
7.1.9. Configuring the firewall
7.1.10. Add the host to CloudStack
7.2. Citrix XenServer Installation for CloudStack
7.2.1. System Requirements for XenServer Hosts
7.2.2. XenServer Installation Steps
7.2.3. Configure XenServer dom0 Memory
7.2.4. Username and Password
7.2.5. Time Synchronization
7.2.6. Licensing
7.2.7. Install CloudStack XenServer Support Package (CSP)
7.2.8. Primary Storage Setup for XenServer
7.2.9. iSCSI Multipath Setup for XenServer (Optional)
7.2.10. Physical Networking Setup for XenServer
7.2.11. Upgrading XenServer Versions
7.3. VMware vSphere Installation and Configuration
7.3.1. System Requirements for vSphere Hosts
7.3.2. Preparation Checklist for VMware
7.3.3. vSphere Installation Steps
7.3.4. ESXi Host setup
7.3.5. Physical Host Networking
7.3.6. Storage Preparation for vSphere (iSCSI only)
7.3.7. Add Hosts or Configure Clusters (vSphere)
7.3.8. Applying Hotfixes to a VMware vSphere Host
8. Additional Installation Options
8.1. Installing the Usage Server (Optional)
8.1.1. Requirements for Installing the Usage Server
8.1.2. Steps to Install the Usage Server
8.2. SSL (Optional)
8.3. Database Replication (Optional)
8.3.1. Failover
9. Choosing a Deployment Architecture
9.1. Small-Scale Deployment
9.2. Large-Scale Redundant Setup
9.3. Separate Storage Network
9.4. Multi-Node Management Server
9.5. Multi-Site Deployment
10. Accounts
10.1. Accounts, Users, and Domains
10.2. Using an LDAP Server for User Authentication
10.2.1. Example LDAP Configuration Commands
10.2.2. Search Base
10.2.3. Query Filter
10.2.4. Search User Bind DN
10.2.5. SSL Keystore Path and Password
11. User Services Overview
11.1. Service Offerings, Disk Offerings, Network Offerings, and Templates
12. Using Projects to Organize Users and Resources
12.1. Overview of Projects
12.2. Configuring Projects
12.2.1. Setting Up Invitations
12.2.2. Setting Resource Limits for Projects
12.2.3. Setting Project Creator Permissions
12.3. Creating a New Project
12.4. Adding Members to a Project
12.4.1. Sending Project Membership Invitations
12.4.2. Adding Project Members From the UI
12.5. Accepting a Membership Invitation
12.6. Suspending or Deleting a Project
12.7. Using the Project View
13. Service Offerings
13.1. Compute and Disk Service Offerings
13.1.1. Creating a New Compute Offering
13.1.2. Creating a New Disk Offering
13.1.3. Modifying or Deleting a Service Offering
13.2. System Service Offerings
13.2.1. Creating a New System Service Offering
13.3. Network Throttling
13.4. Changing the Default System Offering for System VMs
14. Setting Up Networking for Users
14.1. Overview of Setting Up Networking for Users
14.2. About Virtual Networks
14.2.1. Isolated Networks
14.2.2. Shared Networks
14.2.3. Runtime Allocation of Virtual Network Resources
14.3. Network Service Providers
14.4. Network Offerings
14.4.1. Creating a New Network Offering
15. Working With Virtual Machines
15.1. About Working with Virtual Machines
15.2. Best Practices for Virtual Machines
15.3. VM Lifecycle
15.4. Creating VMs
15.5. Accessing VMs
15.6. Stopping and Starting VMs
15.7. Changing the VM Name, OS, or Group
15.8. Changing the Service Offering for a VM
15.9. Moving VMs Between Hosts (Manual Live Migration)
15.10. Deleting VMs
15.11. Working with ISOs
15.11.1. Adding an ISO
15.11.2. Attaching an ISO to a VM
16. Working With Hosts
16.1. Adding Hosts
16.2. Scheduled Maintenance and Maintenance Mode for Hosts
16.2.1. vCenter and Maintenance Mode
16.2.2. XenServer and Maintenance Mode
16.3. Disabling and Enabling Zones, Pods, and Clusters
16.4. Removing Hosts
16.4.1. Removing XenServer and KVM Hosts
16.4.2. Removing vSphere Hosts
16.5. Re-Installing Hosts
16.6. Maintaining Hypervisors on Hosts
16.7. Changing Host Password
16.8. Host Allocation
16.8.1. Over-Provisioning and Service Offering Limits
16.9. VLAN Provisioning
17. Working with Templates
17.1. Creating Templates: Overview
17.2. Requirements for Templates
17.3. Best Practices for Templates
17.4. The Default Template
17.5. Private and Public Templates
17.6. Creating a Template from an Existing Virtual Machine
17.7. Creating a Template from a Snapshot
17.8. Uploading Templates
17.9. Exporting Templates
17.10. Creating a Windows Template
17.10.1. System Preparation for Windows Server 2008 R2
17.10.2. System Preparation for Windows Server 2003 R2
17.11. Importing Amazon Machine Images
17.12. Converting a Hyper-V VM to a Template
17.13. Adding Password Management to Your Templates
17.13.1. Linux OS Installation
17.13.2. Windows OS Installation
17.14. Deleting Templates
18. Working With Storage
18.1. Storage Overview
18.2. Primary Storage
18.2.1. Best Practices for Primary Storage
18.2.2. Runtime Behavior of Primary Storage
18.2.3. Hypervisor Support for Primary Storage
18.2.4. Storage Tags
18.2.5. Maintenance Mode for Primary Storage
18.3. Secondary Storage
18.4. Working With Volumes
18.4.1. Creating a New Volume
18.4.2. Uploading an Existing Volume to a Virtual Machine
18.4.3. Attaching a Volume
18.4.4. Detaching and Moving Volumes
18.4.5. VM Storage Migration
18.4.6. Resizing Volumes
18.4.7. Volume Deletion and Garbage Collection
18.5. Working with Snapshots
18.5.1. Snapshot Job Throttling
18.5.2. Automatic Snapshot Creation and Retention
18.5.3. Incremental Snapshots and Backup
18.5.4. Volume Status
18.5.5. Snapshot Restore
19. Working with Usage
19.1. Configuring the Usage Server
19.2. Setting Usage Limits
19.3. Globally Configured Limits
19.4. Default Account Resource Limits
19.5. Per-Domain Limits
20. Managing Networks and Traffic
20.1. Guest Traffic
20.2. Networking in a Pod
20.3. Networking in a Zone
20.4. Basic Zone Physical Network Configuration
20.5. Advanced Zone Physical Network Configuration
20.5.1. Configure Guest Traffic in an Advanced Zone
20.5.2. Configure Public Traffic in an Advanced Zone
20.6. Using Multiple Guest Networks
20.6.1. Adding an Additional Guest Network
20.6.2. Changing the Network Offering on a Guest Network
20.7. Security Groups
20.7.1. About Security Groups
20.7.2. Adding a Security Group
20.7.3. Security Groups in Advanced Zones (KVM Only)
20.7.4. Enabling Security Groups
20.7.5. Adding Ingress and Egress Rules to a Security Group
20.8. External Firewalls and Load Balancers
20.8.1. About Using a NetScaler Load Balancer
20.8.2. Configuring SNMP Community String on a RHEL Server
20.8.3. Initial Setup of External Firewalls and Load Balancers
20.8.4. Ongoing Configuration of External Firewalls and Load Balancers
20.8.5. Configuring AutoScale
20.9. Load Balancer Rules
20.9.1. Adding a Load Balancer Rule
20.9.2. Sticky Session Policies for Load Balancer Rules
20.10. Guest IP Ranges
20.11. Acquiring a New IP Address
20.12. Releasing an IP Address
20.13. Static NAT
20.13.1. Enabling or Disabling Static NAT
20.14. IP Forwarding and Firewalling
20.14.1. Creating Egress Firewall Rules in an Advanced Zone
20.14.2. Firewall Rules
20.14.3. Port Forwarding
20.15. IP Load Balancing
20.16. DNS and DHCP
20.17. VPN
20.17.1. Configuring VPN
20.17.2. Using VPN with Windows
20.17.3. Using VPN with Mac OS X
20.17.4. Setting Up a Site-to-Site VPN Connection
20.18. About Inter-VLAN Routing
20.19. Configuring a Virtual Private Cloud
20.19.1. About Virtual Private Clouds
20.19.2. Adding a Virtual Private Cloud
20.19.3. Adding Tiers
20.19.4. Configuring Access Control List
20.19.5. Adding a Private Gateway to a VPC
20.19.6. Deploying VMs to the Tier
20.19.7. Acquiring a New IP Address for a VPC
20.19.8. Releasing an IP Address Allotted to a VPC
20.19.9. Enabling or Disabling Static NAT on a VPC
20.19.10. Adding Load Balancing Rules on a VPC
20.19.11. Adding a Port Forwarding Rule on a VPC
20.19.12. Removing Tiers
20.19.13. Editing, Restarting, and Removing a Virtual Private Cloud
20.20. Persistent Networks
20.20.1. Persistent Network Considerations
20.20.2. Creating a Persistent Guest Network
21. Working with System Virtual Machines
21.1. The System VM Template
21.2. Multiple System VM Support for VMware
21.3. Console Proxy
21.3.1. Using a SSL Certificate for the Console Proxy
21.3.2. Changing the Console Proxy SSL Certificate and Domain
21.4. Virtual Router
21.4.1. Configuring the Virtual Router
21.4.2. Upgrading a Virtual Router with System Service Offerings
21.4.3. Best Practices for Virtual Routers
21.5. Secondary Storage VM
22. System Reliability and High Availability
22.1. HA for Management Server
22.2. Management Server Load Balancing
22.3. HA-Enabled Virtual Machines
22.4. HA for Hosts
22.4.1. Dedicated HA Hosts
22.5. Primary Storage Outage and Data Loss
22.6. Secondary Storage Outage and Data Loss
23. Managing the Cloud
23.1. Using Tags to Organize Resources in the Cloud
23.2. Changing the Database Configuration
23.3. Changing the Database Password
23.4. Administrator Alerts
23.5. Customizing the Network Domain Name
23.6. Stopping and Restarting the Management Server
24. CloudStack API
24.1. Provisioning and Authentication API
24.2. Allocators
24.3. User Data and Meta Data
25. Tuning
25.1. Performance Monitoring
25.2. Increase Management Server Maximum Memory
25.3. Set Database Buffer Pool Size
25.4. Set and Monitor Total VM Limits per Host
25.5. Configure XenServer dom0 Memory
26. Troubleshooting
26.1. Events
26.1.1. Event Logs
26.1.2. Event Notification
26.1.3. Standard Events
26.1.4. Long Running Job Events
26.1.5. Event Log Queries
26.2. Working with Server Logs
26.3. Data Loss on Exported Primary Storage
26.4. Recovering a Lost Virtual Router
26.5. Maintenance mode not working on vCenter
26.6. Unable to deploy VMs from uploaded vSphere template
26.7. Unable to power on virtual machine on VMware
26.8. Load balancer rules fail after changing network offering
27. Introduction to the CloudStack API
27.1. Roles
27.2. API Reference Documentation
27.3. Getting Started
28. What's New in the API?
28.1. What's New in the API for 4.1
28.1.1. Reconfiguring Physical Networks in VMs
28.1.2. IPv6 Support in CloudStack
28.1.3. Additional VMX Settings
28.1.4. Resetting SSH Keys to Access VMs
28.1.5. Changed API Commands in 4.1
28.1.6. Added API Commands in 4.1-incubating
28.2. What's New in the API for 4.0
28.2.1. Changed API Commands in 4.0.0-incubating
28.2.2. Added API Commands in 4.0.0-incubating
28.3. What's New in the API for 3.0
28.3.1. Enabling Port 8096
28.3.2. Stopped VM
28.3.3. Change to Behavior of List Commands
28.3.4. Removed API commands
28.3.5. Added API commands in 3.0
28.3.6. Added CloudStack Error Codes
29. Calling the CloudStack API
29.1. Making API Requests
29.2. Signing API Requests
29.2.1. How to sign an API call with Python
29.3. Enabling API Call Expiration
29.4. Limiting the Rate of API Requests
29.4.1. Configuring the API Request Rate
29.4.2. Limitations on API Throttling
29.5. Responses
29.5.1. Response Formats: XML and JSON
29.5.2. Maximum Result Pages Returned
29.5.3. Error Handling
29.6. Asynchronous Commands
29.6.1. Job Status
29.6.2. Example
30. Working With Usage Data
30.1. Usage Record Format
30.1.1. Virtual Machine Usage Record Format
30.1.2. Network Usage Record Format
30.1.3. IP Address Usage Record Format
30.1.4. Disk Volume Usage Record Format
30.1.5. Template, ISO, and Snapshot Usage Record Format
30.1.6. Load Balancer Policy or Port Forwarding Rule Usage Record Format
30.1.7. Network Offering Usage Record Format
30.1.8. VPN User Usage Record Format
30.2. Usage Types
30.3. Example response from listUsageRecords
30.4. Dates in the Usage Record
A. Time Zones
B. Event Types
C. Alerts

Chapter 1. Concepts

1.1. What Is CloudStack?

CloudStack is an open source software platform that pools computing resources to build public, private, and hybrid Infrastructure as a Service (IaaS) clouds. CloudStack manages the network, storage, and compute nodes that make up a cloud infrastructure. Use CloudStack to deploy, manage, and configure cloud computing environments.
Typical users are service providers and enterprises. With CloudStack, you can:
  • Set up an on-demand, elastic cloud computing service. Service providers can sell self-service virtual machine instances, storage volumes, and networking configurations over the Internet.
  • Set up an on-premise private cloud for use by employees. Rather than managing virtual machines in the same way as physical machines, with CloudStack an enterprise can offer self-service virtual machines to users without involving IT departments.
1000-foot-view.png: Overview of CloudStack

1.2. What Can CloudStack Do?

Multiple Hypervisor Support
CloudStack works with a variety of hypervisors, and a single cloud deployment can contain multiple hypervisor implementations. The current release of CloudStack supports pre-packaged enterprise solutions like Citrix XenServer and VMware vSphere, as well as KVM or Xen running on Ubuntu or CentOS.
Massively Scalable Infrastructure Management
CloudStack can manage tens of thousands of servers installed in multiple geographically distributed datacenters. The centralized management server scales linearly, eliminating the need for intermediate cluster-level management servers. No single component failure can cause cloud-wide outage. Periodic maintenance of the management server can be performed without affecting the functioning of virtual machines running in the cloud.
Automatic Configuration Management
CloudStack automatically configures each guest virtual machine’s networking and storage settings.
CloudStack internally manages a pool of virtual appliances to support the cloud itself. These appliances offer services such as firewalling, routing, DHCP, VPN access, console proxy, storage access, and storage replication. The extensive use of virtual appliances simplifies the installation, configuration, and ongoing management of a cloud deployment.
Graphical User Interface
CloudStack offers an administrator's Web interface, used for provisioning and managing the cloud, as well as an end-user's Web interface, used for running VMs and managing VM templates. The UI can be customized to reflect the desired service provider or enterprise look and feel.
API and Extensibility
CloudStack provides an API that gives programmatic access to all the management features available in the UI. The API is maintained and documented. This API enables the creation of command line tools and new user interfaces to suit particular needs. See the Developer's Guide and the API Reference, both available on the Apache CloudStack documentation site.
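As a small illustration (not part of the formal API documentation, and the host name below is a placeholder): once the unauthenticated integration API port (8096) has been enabled on the Management Server, a query such as the following lists virtual machines.
$ curl "http://management-server.example.org:8096/client/api?command=listVirtualMachines&response=json"
Production API calls normally go through port 8080 and must be signed with an API key, as described in the API chapters later in this guide.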
The CloudStack pluggable allocation architecture allows the creation of new types of allocators for the selection of storage and Hosts. See the Allocator Implementation Guide (http://docs.cloudstack.org/CloudStack_Documentation/Allocator_Implementation_Guide).
High Availability
CloudStack has a number of features to increase the availability of the system. The Management Server itself may be deployed in a multi-node installation where the servers are load balanced. MySQL may be configured to use replication to provide for a manual failover in the event of database loss. For the hosts, CloudStack supports NIC bonding and the use of separate networks for storage as well as iSCSI Multipath.

1.3. Deployment Architecture Overview

A CloudStack installation consists of two parts: the Management Server and the cloud infrastructure that it manages. When you set up and manage a CloudStack cloud, you provision resources such as hosts, storage devices, and IP addresses into the Management Server, and the Management Server manages those resources.
The minimum production installation consists of one machine running the CloudStack Management Server and another machine to act as the cloud infrastructure (in this case, a very simple infrastructure consisting of one host running hypervisor software). In its smallest deployment, a single machine can act as both the Management Server and the hypervisor host (using the KVM hypervisor).
basic-deployment.png: Basic two-machine deployment
A more full-featured installation consists of a highly-available multi-node Management Server installation and up to tens of thousands of hosts using any of several advanced networking setups. For information about deployment options, see the "Choosing a Deployment Architecture" section of the CloudStack Installation Guide.

1.3.1. Management Server Overview

The Management Server is the CloudStack software that manages cloud resources. By interacting with the Management Server through its UI or API, you can configure and manage your cloud infrastructure.
The Management Server runs on a dedicated server or VM. It controls allocation of virtual machines to hosts and assigns storage and IP addresses to the virtual machine instances. The Management Server runs in a Tomcat container and requires a MySQL database for persistence.
The machine must meet the system requirements described in System Requirements.
The Management Server:
  • Provides the web user interface for the administrator and a reference user interface for end users.
  • Provides the APIs for CloudStack.
  • Manages the assignment of guest VMs to particular hosts.
  • Manages the assignment of public and private IP addresses to particular accounts.
  • Manages the allocation of storage to guests as virtual disks.
  • Manages snapshots, templates, and ISO images, possibly replicating them across data centers.
  • Provides a single point of configuration for the cloud.

1.3.2. Cloud Infrastructure Overview

The Management Server manages one or more zones (typically, datacenters) containing host computers where guest virtual machines will run. The cloud infrastructure is organized as follows:
  • Zone: Typically, a zone is equivalent to a single datacenter. A zone consists of one or more pods and secondary storage.
  • Pod: A pod is usually one rack of hardware that includes a layer-2 switch and one or more clusters.
  • Cluster: A cluster consists of one or more hosts and primary storage.
  • Host: A single compute node within a cluster. The hosts are where the actual cloud services run in the form of guest virtual machines.
  • Primary storage is associated with a cluster, and it stores the disk volumes for all the VMs running on hosts in that cluster.
  • Secondary storage is associated with a zone, and it stores templates, ISO images, and disk volume snapshots.
infrastructure_overview.png: Nested organization of a zone
More Information
For more information, see documentation on cloud infrastructure concepts.

1.3.3. Networking Overview

CloudStack offers two networking scenarios:
  • Basic. For AWS-style networking. Provides a single network where guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
  • Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks.
For more details, see Network Setup.

Chapter 2. Cloud Infrastructure Concepts

2.1. About Regions

To increase reliability of the cloud, you can optionally group resources into multiple geographic regions. A region is the largest available organizational unit within a CloudStack deployment. A region is made up of several availability zones, where each zone is roughly equivalent to a datacenter. Each region is controlled by its own cluster of Management Servers, running in one of the zones. The zones in a region are typically located in close geographical proximity. Regions are a useful technique for providing fault tolerance and disaster recovery.
By grouping zones into regions, the cloud can achieve higher availability and scalability. User accounts can span regions, so that users can deploy VMs in multiple, widely-dispersed regions. Even if one of the regions becomes unavailable, the services are still available to the end-user through VMs deployed in another region. And by grouping communities of zones under their own nearby Management Servers, the latency of communications within the cloud is reduced compared to managing widely-dispersed zones from a single central Management Server.
Usage records can also be consolidated and tracked at the region level, creating reports or invoices for each geographic region.
region-overview.png: Nested structure of a region.
Regions are visible to the end user. When a user starts a guest VM, the user must select a region for their guest. Users might also be required to copy their private templates to additional regions to enable creation of guest VMs using their templates in those regions.

2.2. About Zones

A zone is the second largest organizational unit within a CloudStack deployment. A zone typically corresponds to a single datacenter, although it is permissible to have multiple zones in a datacenter. The benefit of organizing infrastructure into zones is to provide physical isolation and redundancy. For example, each zone can have its own power supply and network uplink, and the zones can be widely separated geographically (though this is not required).
A zone consists of:
  • One or more pods. Each pod contains one or more clusters of hosts and one or more primary storage servers.
  • Secondary storage, which is shared by all the pods in the zone.
zone-overview.png: Nested structure of a simple zone.
Zones are visible to the end user. When a user starts a guest VM, the user must select a zone for their guest. Users might also be required to copy their private templates to additional zones to enable creation of guest VMs using their templates in those zones.
Zones can be public or private. Public zones are visible to all users. This means that any user may create a guest in that zone. Private zones are reserved for a specific domain. Only users in that domain or its subdomains may create guests in that zone.
Hosts in the same zone are directly accessible to each other without having to go through a firewall. Hosts in different zones can access each other through statically configured VPN tunnels.
For each zone, the administrator must decide the following.
  • How many pods to place in a zone.
  • How many clusters to place in each pod.
  • How many hosts to place in each cluster.
  • How many primary storage servers to place in each cluster and total capacity for the storage servers.
  • How much secondary storage to deploy in a zone.
When you add a new zone, you will be prompted to configure the zone’s physical network and add the first pod, cluster, host, primary storage, and secondary storage.

2.3. About Pods

A pod often represents a single rack. Hosts in the same pod are in the same subnet. A pod is the third-largest organizational unit within a CloudStack deployment. Pods are contained within zones. Each zone can contain one or more pods. A pod consists of one or more clusters of hosts and one or more primary storage servers. Pods are not visible to the end user.
pod-overview.png: Nested structure of a simple pod

2.4. About Clusters

A cluster provides a way to group hosts. To be precise, a cluster is a XenServer server pool, a set of KVM servers, or a VMware cluster preconfigured in vCenter. The hosts in a cluster all have identical hardware, run the same hypervisor, are on the same subnet, and access the same shared primary storage. Virtual machine instances (VMs) can be live-migrated from one host to another within the same cluster, without interrupting service to the user.
A cluster is the fourth-largest organizational unit within a CloudStack deployment. Clusters are contained within pods, and pods are contained within zones. The size of a cluster is limited by the underlying hypervisor, although CloudStack recommends a smaller size in most cases; see Best Practices.
A cluster consists of one or more hosts and one or more primary storage servers.
cluster-overview.png: Structure of a simple cluster
CloudStack allows multiple clusters in a cloud deployment.
Even when local storage is used exclusively, clusters are still required organizationally, even if there is just one host per cluster.
When VMware is used, every VMware cluster is managed by a vCenter server. The administrator must register the vCenter server with CloudStack. There may be multiple vCenter servers per zone. Each vCenter server may manage multiple VMware clusters.

2.5. About Hosts

A host is a single computer. Hosts provide the computing resources that run the guest virtual machines. Each host has hypervisor software installed on it to manage the guest VMs. For example, a Linux KVM-enabled server, a Citrix XenServer server, and an ESXi server are hosts.
The host is the smallest organizational unit within a CloudStack deployment. Hosts are contained within clusters, clusters are contained within pods, and pods are contained within zones.
Hosts in a CloudStack deployment:
  • Provide the CPU, memory, storage, and networking resources needed to host the virtual machines
  • Interconnect using a high bandwidth TCP/IP network and connect to the Internet
  • May reside in multiple data centers across different geographic locations
  • May have different capacities (different CPU speeds, different amounts of RAM, etc.), although the hosts within a cluster must all be homogeneous
Additional hosts can be added at any time to provide more capacity for guest VMs.
CloudStack automatically detects the amount of CPU and memory resources provided by the Hosts.
Hosts are not visible to the end user. An end user cannot determine which host their guest has been assigned to.
For a host to function in CloudStack, you must do the following:
  • Install hypervisor software on the host
  • Assign an IP address to the host
  • Ensure the host is connected to the CloudStack Management Server

2.6. About Primary Storage

Primary storage is associated with a cluster, and it stores the disk volumes for all the VMs running on hosts in that cluster. You can add multiple primary storage servers to a cluster. At least one is required. It is typically located close to the hosts for increased performance.
CloudStack is designed to work with all standards-compliant iSCSI and NFS servers that are supported by the underlying hypervisor, including, for example:
  • Dell EqualLogic™ for iSCSI
  • Network Appliance filers for NFS and iSCSI
  • Scale Computing for NFS
If you intend to use only local disk for your installation, you can skip to Add Secondary Storage.

2.7. About Secondary Storage

Secondary storage is associated with a zone, and it stores the following:
  • Templates — OS images that can be used to boot VMs and can include additional configuration information, such as installed applications
  • ISO images — disc images containing data or bootable media for operating systems
  • Disk volume snapshots — saved copies of VM data which can be used for data recovery or to create new templates
The items in zone-based NFS secondary storage are available to all hosts in the zone. CloudStack manages the allocation of guest virtual disks to particular primary storage devices.
To make items in secondary storage available to all hosts throughout the cloud, you can add OpenStack Object Storage (Swift, swift.openstack.org) in addition to the zone-based NFS secondary storage. When using Swift, you configure Swift storage for the entire CloudStack, then set up NFS secondary storage for each zone as usual. The NFS storage in each zone acts as a staging area through which all templates and other secondary storage data pass before being forwarded to Swift. The Swift storage acts as a cloud-wide resource, making templates and other data available to any zone in the cloud. There is no hierarchy in the Swift storage, just one Swift container per storage object. Any secondary storage in the whole cloud can pull a container from Swift at need. It is not necessary to copy templates and snapshots from one zone to another, as would be required when using zone NFS alone. Everything is available everywhere.

2.8. About Physical Networks

Part of adding a zone is setting up the physical network. One or (in an advanced zone) more physical networks can be associated with each zone. The network corresponds to a NIC on the hypervisor host. Each physical network can carry one or more types of network traffic. The choices of traffic type for each network vary depending on whether you are creating a zone with basic networking or advanced networking.
A physical network is the actual network hardware and wiring in a zone. A zone can have multiple physical networks. An administrator can:
  • Add/Remove/Update physical networks in a zone
  • Configure VLANs on the physical network
  • Configure a name so the network can be recognized by hypervisors
  • Configure the service providers (firewalls, load balancers, etc.) available on a physical network
  • Configure the IP addresses trunked to a physical network
  • Specify what type of traffic is carried on the physical network, as well as other properties like network speed

2.8.1. Basic Zone Network Traffic Types

When basic networking is used, there can be only one physical network in the zone. That physical network carries the following traffic types:
  • Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. Each pod in a basic zone is a broadcast domain, and therefore each pod has a different IP range for the guest network. The administrator must configure the IP range for each pod.
  • Management. When CloudStack's internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudStack to perform various tasks in the cloud), and any other component that communicates directly with the CloudStack Management Server. You must configure the IP range for the system VMs to use.

    Note

    We strongly recommend the use of separate NICs for management traffic and guest traffic.
  • Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudStack UI to acquire these IPs to implement NAT between their guest network and the public network, as described in Acquiring a New IP Address.
  • Storage. While labeled "storage," this traffic type refers specifically to secondary storage and does not affect traffic for primary storage. It includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudStack uses a separate Network Interface Controller (NIC), named the storage NIC, for storage network traffic. Use of a storage NIC that always operates on a high-bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
In a basic network, configuring the physical network is fairly straightforward. In most cases, you only need to configure one guest network to carry traffic that is generated by guest VMs. If you use a NetScaler load balancer and enable its elastic IP and elastic load balancing (EIP and ELB) features, you must also configure a network to carry public traffic. CloudStack takes care of presenting the necessary network configuration steps to you in the UI when you add a new zone.

2.8.2. Basic Zone Guest IP Addresses

When basic networking is used, CloudStack will assign IP addresses in the CIDR of the pod to the guests in that pod. The administrator must add a Direct IP range on the pod for this purpose. These IPs are in the same VLAN as the hosts.

2.8.3. Advanced Zone Network Traffic Types

When advanced networking is used, there can be multiple physical networks in the zone. Each physical network can carry one or more traffic types, and you need to let CloudStack know which type of network traffic you want each network to carry. The traffic types in an advanced zone are:
  • Guest. When end users run VMs, they generate guest traffic. The guest VMs communicate with each other over a network that can be referred to as the guest network. This network can be isolated or shared. In an isolated guest network, the administrator needs to reserve VLAN ranges to provide isolation for each CloudStack account’s network (potentially a large number of VLANs). In a shared guest network, all guest VMs share a single network.
  • Management. When CloudStack’s internal resources communicate with each other, they generate management traffic. This includes communication between hosts, system VMs (VMs used by CloudStack to perform various tasks in the cloud), and any other component that communicates directly with the CloudStack Management Server. You must configure the IP range for the system VMs to use.
  • Public. Public traffic is generated when VMs in the cloud access the Internet. Publicly accessible IPs must be allocated for this purpose. End users can use the CloudStack UI to acquire these IPs to implement NAT between their guest network and the public network, as described in “Acquiring a New IP Address” in the Administration Guide.
  • Storage. While labeled "storage," this traffic type refers specifically to secondary storage and does not affect traffic for primary storage. It includes traffic such as VM templates and snapshots, which is sent between the secondary storage VM and secondary storage servers. CloudStack uses a separate Network Interface Controller (NIC), named the storage NIC, for storage network traffic. Use of a storage NIC that always operates on a high-bandwidth network allows fast template and snapshot copying. You must configure the IP range to use for the storage network.
These traffic types can each be on a separate physical network, or they can be combined with certain restrictions. When you use the Add Zone wizard in the UI to create a new zone, you are guided into making only valid choices.

2.8.4. Advanced Zone Guest IP Addresses

When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired.

2.8.5. Advanced Zone Public IP Addresses

When advanced networking is used, the administrator can create additional networks for use by the guests. These networks can span the zone and be available to all accounts, or they can be scoped to a single account, in which case only the named account may create guests that attach to these networks. The networks are defined by a VLAN ID, IP range, and gateway. The administrator may provision thousands of these networks if desired.

2.8.6. System Reserved IP Addresses

In each zone, you need to configure a range of reserved IP addresses for the management network. This network carries communication between the CloudStack Management Server and various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP.
The reserved IP addresses must be unique across the cloud. You cannot, for example, have a host in one zone which has the same private IP address as a host in another zone.
The hosts in a pod are assigned private IP addresses. These are typically RFC1918 addresses. The Console Proxy and Secondary Storage system VMs are also allocated private IP addresses in the CIDR of the pod that they are created in.
Make sure computing servers and Management Servers use IP addresses outside of the System Reserved IP range. For example, suppose the System Reserved IP range starts at 192.168.154.2 and ends at 192.168.154.7. CloudStack can use .2 to .7 for System VMs. This leaves the rest of the pod CIDR, from .8 to .254, for the Management Server and hypervisor hosts.
In all zones:
Provide private IPs for the system in each pod and provision them in CloudStack.
For KVM and XenServer, the recommended number of private IPs per pod is one per host. If you expect a pod to grow, add enough private IPs now to accommodate the growth.
In a zone that uses advanced networking:
For zones with advanced networking, we recommend provisioning enough private IPs for your total number of customers, plus enough for the required CloudStack System VMs. Typically, about 10 additional IPs are required for the System VMs. For more information about System VMs, see Working with System Virtual Machines in the Administrator's Guide.
When advanced networking is being used, the number of private IP addresses available in each pod varies depending on which hypervisor is running on the nodes in that pod. Citrix XenServer and KVM use link-local addresses, which in theory provide more than 65,000 private IP addresses within the address block. As the pod grows over time, this should be more than enough for any reasonable number of hosts as well as IP addresses for guest virtual routers. VMware ESXi, by contrast, uses an administrator-specified subnetting scheme, and the typical administrator provides only 255 IPs per pod. Since these are shared by physical machines, the guest virtual router, and other entities, it is possible to run out of private IPs when scaling up a pod whose nodes are running ESXi.
To ensure adequate headroom to scale private IP space in an ESXi pod that uses advanced networking, use one or both of the following techniques:
  • Specify a larger CIDR block for the subnet. A subnet mask with a /20 suffix will provide more than 4,000 IP addresses.
  • Create multiple pods, each with its own subnet. For example, if you create 10 pods and each pod has 255 IPs, this will provide 2,550 IP addresses.
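For reference, the arithmetic behind the first option: a /20 subnet leaves 32 − 20 = 12 host bits, or 2^12 = 4,096 addresses (4,094 usable once the network and broadcast addresses are excluded), compared with 254 usable addresses in a typical /24.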

Chapter 3. Installation

3.1. Who Should Read This

This section is for those who have already gone through a design phase and planned a more sophisticated deployment, or those who are ready to start scaling up a trial installation. With the following procedures, you can start using the more powerful features of CloudStack, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere.

3.2. Overview of Installation Steps

For anything more than a simple trial installation, you will need guidance for a variety of configuration choices. It is strongly recommended that you read the following:
  • Choosing a Deployment Architecture
  • Choosing a Hypervisor: Supported Features
  • Network Setup
  • Storage Setup
  • Best Practices
  1. Make sure you have the required hardware ready. See Section 3.3, “Minimum System Requirements”
  2. Install the Management Server (choose single-node or multi-node). See Section 3.5, “Management Server Installation”
  3. Log in to the UI. See Chapter 4, User Interface
  4. Add a zone. Includes the first pod, cluster, and host. See Section 5.3, “Adding a Zone”
  5. Add more pods (optional). See Section 5.4, “Adding a Pod”
  6. Add more clusters (optional). See Section 5.5, “Adding a Cluster”
  7. Add more hosts (optional). See Section 5.6, “Adding a Host”
  8. Add more primary storage (optional). See Section 5.7, “Add Primary Storage”
  9. Add more secondary storage (optional). See Section 5.8, “Add Secondary Storage”
  10. Try using the cloud. See Section 5.9, “Initialize and Test”

3.3. Minimum System Requirements

3.3.1. Management Server, Database, and Storage System Requirements

The machines that will run the Management Server and MySQL database must meet the following requirements. The same machines can also be used to provide primary and secondary storage, such as via local disk or NFS. The Management Server may be placed on a virtual machine.
  • Operating system:
    • Preferred: CentOS/RHEL 6.3+ or Ubuntu 12.04(.1)
  • 64-bit x86 CPU (more cores results in better performance)
  • 4 GB of memory
  • 250 GB of local disk (more results in better capability; 500 GB recommended)
  • At least 1 NIC
  • Statically allocated IP address
  • Fully qualified domain name as returned by the hostname command
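A few quick checks from a shell on the candidate machine can confirm most of these requirements (a sketch; standard Linux utilities are assumed):
hostname --fqdn    # must return a fully qualified domain name
free -g            # at least 4 GB of memory
df -h /            # at least 250 GB of local disk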

3.3.2. Host/Hypervisor System Requirements

The host is where the cloud services run in the form of guest virtual machines. Each host is one machine that meets the following requirements:
  • Must support HVM (Intel-VT or AMD-V enabled).
  • 64-bit x86 CPU (more cores results in better performance)
  • Hardware virtualization support required
  • 4 GB of memory
  • 36 GB of local disk
  • At least 1 NIC
  • Note

    If DHCP is used for hosts, ensure that no conflict occurs between the DHCP server used for these hosts and the DHCP router created by CloudStack.
  • Latest hotfixes applied to hypervisor software
  • When you deploy CloudStack, the hypervisor host must not have any VMs already running
  • All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
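As a quick sanity check (a sketch, not a formal verification procedure), you can confirm from a Linux shell that a candidate host's CPU advertises hardware virtualization support; a non-zero count indicates Intel VT-x or AMD-V capable cores, although the feature must also be enabled in the BIOS:
egrep -c '(vmx|svm)' /proc/cpuinfo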
Hosts have additional requirements depending on the hypervisor. See the requirements listed at the top of the Installation section for your chosen hypervisor:

Warning

Be sure you fulfill the additional hypervisor requirements and installation steps provided in this Guide. Hypervisor hosts must be properly prepared to work with CloudStack. For example, the requirements for XenServer are listed under Citrix XenServer Installation.

3.4. Configure package repository

CloudStack is officially distributed only as source code from the Apache mirrors. However, members of the CloudStack community may build convenience binaries so that users can install Apache CloudStack without needing to build from source.
If you did not build your own packages from source as described in Section 3.6, “Building RPMs from Source” or Section 3.7, “Building DEB packages”, you can find pre-built DEB and RPM packages for your convenience linked from the downloads page.

Note

These repositories contain both the Management Server and KVM Hypervisor packages.

3.4.1. DEB package repository

You can add a DEB package repository to your apt sources with the following commands. Please note that only packages for Ubuntu 12.04 LTS (precise) are being built at this time.
Use your preferred editor to open (or create) /etc/apt/sources.list.d/cloudstack.list. Add the community-provided repository to the file:
deb http://cloudstack.apt-get.eu/ubuntu precise 4.0
We now have to add the public key to the trusted keys.
$ wget -O - http://cloudstack.apt-get.eu/release.asc | apt-key add -
Now update your local apt cache.
$ apt-get update
Your DEB package repository should now be configured and ready for use.
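As a quick check that the repository is usable (using the cloud-client package name that appears later in this guide), you can ask apt where the package would be installed from:
$ apt-cache policy cloud-client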

3.4.2. RPM package repository

There is an RPM package repository for CloudStack so you can easily install on RHEL-based platforms.
If you're using an RPM-based system, you'll want to add the Yum repository so that you can install CloudStack with Yum.
Yum repository information is found under /etc/yum.repos.d. You'll see several .repo files in this directory, each one denoting a specific repository.
To add the CloudStack repository, create /etc/yum.repos.d/cloudstack.repo and insert the following information.
[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.0/
enabled=1
gpgcheck=0
Now you should be able to install CloudStack using Yum.
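For example, using the Management Server package name that appears later in this guide, installation is then a single command:
yum install cloud-client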

3.5. Management Server Installation

3.5.1. Management Server Installation Overview

This section describes installing the Management Server. There are two slightly different installation flows, depending on how many Management Server nodes will be in your cloud:
  • A single Management Server node, with MySQL on the same node.
  • Multiple Management Server nodes, with MySQL on a node separate from the Management Servers.
In either case, each machine must meet the system requirements described in System Requirements.

Warning

For the sake of security, be sure the public Internet cannot access port 8096 or port 8250 on the Management Server.
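One possible way to enforce this on the Management Server itself, assuming an iptables-based firewall and a management subnet of 192.168.10.0/24 (both values are assumptions; adjust for your environment, and prefer also restricting these ports at the perimeter firewall):
iptables -A INPUT -p tcp --dport 8096 -s 192.168.10.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8096 -j DROP
iptables -A INPUT -p tcp --dport 8250 -s 192.168.10.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8250 -j DROP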
The procedure for installing the Management Server is:
  1. Prepare the Operating System
  2. (XenServer only) Download and install vhd-util.
  3. Install the First Management Server
  4. Install and Configure the MySQL database
  5. Prepare NFS Shares
  6. Prepare and Start Additional Management Servers (optional)
  7. Prepare the System VM Template

3.5.2. Prepare the Operating System

The OS must be prepared to host the Management Server using the following steps. These steps must be performed on each Management Server node.
  1. Log in to your OS as root.
  2. Check for a fully qualified hostname.
    hostname --fqdn
    This should return a fully qualified hostname such as "management1.lab.example.org". If it does not, edit /etc/hosts so that it does.
  3. Make sure that the machine can reach the Internet.
    ping www.cloudstack.org
  4. Turn on NTP for time synchronization.

    Note

    NTP is required to synchronize the clocks of the servers in your cloud.
    1. Install NTP.
      yum install ntp
      apt-get install openntpd
  5. Repeat all of these steps on every host where the Management Server will be installed.

3.5.3. Install the Management Server on the First Host

The first step in installation, whether you are installing the Management Server on one host or many, is to install the software on a single node.

Note

If you are planning to install the Management Server on multiple nodes for high availability, do not proceed to the additional nodes yet. That step will come later.
The CloudStack Management Server can be installed using either RPM or DEB packages. These packages depend on, and will pull in, everything you need to run the Management Server.

3.5.3.1. Install on CentOS/RHEL

We start by installing the required packages:
yum install cloud-client

3.5.3.2. Install on Ubuntu

apt-get install cloud-client

3.5.3.3. Downloading vhd-util

This procedure is required only for installations where XenServer is installed on the hypervisor hosts.
Before setting up the Management Server, download the vhd-util utility.
If the Management Server is RHEL or CentOS, copy vhd-util to /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver.
If the Management Server is Ubuntu, copy vhd-util to /usr/lib/cloud/common/scripts/vm/hypervisor/xenserver.
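For example, on a RHEL or CentOS Management Server the copy might look like the following (a sketch assuming vhd-util has been downloaded to the current directory; marking it executable is a reasonable precaution):
chmod +x vhd-util
cp vhd-util /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver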

3.5.4. Install the database server

The CloudStack management server uses a MySQL database server to store its data. When you are installing the management server on a single node, you can install the MySQL server locally. For an installation that has multiple management server nodes, we assume the MySQL database also runs on a separate node.
CloudStack has been tested with MySQL 5.1 and 5.5. These versions are included in RHEL/CentOS and Ubuntu.

3.5.4.1. Install the Database on the Management Server Node

This section describes how to install MySQL on the same machine with the Management Server. This technique is intended for a simple deployment that has a single Management Server node. If you have a multi-node Management Server deployment, you will typically use a separate node for MySQL. See Section 3.5.4.2, “Install the Database on a Separate Node”.
  1. Install MySQL from the package repository of your distribution:
    yum install mysql-server
    apt-get install mysql-server
  2. Open the MySQL configuration file. The configuration file is /etc/my.cnf or /etc/mysql/my.cnf, depending on your OS.
  3. Insert the following lines in the [mysqld] section.
    You can put these lines below the datadir line. The max_connections parameter should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes one Management Server.

    Note

    On Ubuntu, you can also create a file /etc/mysql/conf.d/cloudstack.cnf and add these directives there. Don't forget to add [mysqld] on the first line of the file.
    innodb_rollback_on_timeout=1
    innodb_lock_wait_timeout=600
    max_connections=350
    log-bin=mysql-bin
    binlog-format = 'ROW'
  4. Start or restart MySQL to put the new configuration into effect.
    On RHEL/CentOS, MySQL doesn't automatically start after installation. Start it manually.
    service mysqld start
    On Ubuntu, restart MySQL.
    service mysql restart
  5. (CentOS and RHEL only; not required on Ubuntu)

    Warning

    On RHEL and CentOS, MySQL does not set a root password by default. It is very strongly recommended that you set a root password as a security precaution.
    Run the following command to secure your installation. You can answer "Y" to all questions.
    mysql_secure_installation
  6. CloudStack can be blocked by security mechanisms, such as SELinux. Disable SELinux to ensure that the Agent has all the required permissions.
    Configure SELinux (RHEL and CentOS):
    1. Check whether SELinux is installed on your machine. If not, you can skip this section.
      In RHEL or CentOS, SELinux is installed and enabled by default. You can verify this with:
      $ rpm -qa | grep selinux
    2. Set the SELINUX variable in /etc/selinux/config to "permissive". This ensures that the permissive setting will be maintained after a system reboot.
      In RHEL or CentOS:
      vi /etc/selinux/config
      Change the following line
      SELINUX=enforcing
      to this:
      SELINUX=permissive
    3. Set SELinux to permissive starting immediately, without requiring a system reboot.
      $ setenforce permissive
  7. Set up the database. The following command creates the "cloud" user on the database.
    • In dbpassword, specify the password to be assigned to the "cloud" user. You can choose to provide no password although that is not recommended.
    • In deploy-as, specify the username and password of the user deploying the database. In the following command, it is assumed the root user is deploying the database and creating the "cloud" user.
    • (Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file. See Section 3.5.5, “About Password and Key Encryption”.
    • (Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack properties file. Default: password. It is highly recommended that you replace this with a more secure value. See Section 3.5.5, “About Password and Key Encryption”.
    • (Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack database. Default: password. It is highly recommended that you replace this with a more secure value. See Section 3.5.5, “About Password and Key Encryption”.
    • (Optional) For management_server_ip, you may explicitly specify cluster management server node IP. If not specified, the local IP address will be used.
    cloudstack-setup-databases cloud:<dbpassword>@localhost \
    --deploy-as=root:<password> \
    -e <encryption_type> \
    -m <management_server_key> \
    -k <database_key> \
    -i <management_server_ip>
    When this script is finished, you should see a message like “Successfully initialized the database.” (A filled-in example of this command appears at the end of this procedure.)
  8. If you are running the KVM hypervisor on the same machine with the Management Server, edit /etc/sudoers and add the following line:
    Defaults:cloud !requiretty
  9. Now that the database is set up, you can finish configuring the OS for the Management Server. This command will set up iptables, sudoers, and start the Management Server.
    # cloudstack-setup-management
    You should see the message “CloudStack Management Server setup is done.”
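For illustration, a filled-in form of the database setup command from step 7 might look like the following; the password and key values are placeholders only and should be replaced with secure values of your own:
cloudstack-setup-databases cloud:SecureDbPass@localhost \
--deploy-as=root:RootDbPass \
-e file \
-m ExampleMgmtKey \
-k ExampleDbKey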

3.5.4.2. Install the Database on a Separate Node

This section describes how to install MySQL on a standalone machine, separate from the Management Server. This technique is intended for a deployment that includes several Management Server nodes. If you have a single-node Management Server deployment, you will typically use the same node for MySQL. See Section 3.5.4.1, “Install the Database on the Management Server Node”.

Note

The management server doesn't require a specific distribution for the MySQL node. You can use a distribution or Operating System of your choice. Using the same distribution as the management server is recommended, but not required. See Section 3.3.1, “Management Server, Database, and Storage System Requirements”.
  1. Install MySQL from the package repository provided by your distribution.
    On RHEL or CentOS:
    yum install mysql-server
    On Ubuntu:
    apt-get install mysql-server
  2. Edit the MySQL configuration (/etc/my.cnf or /etc/mysql/my.cnf, depending on your OS) and insert the following lines in the [mysqld] section. You can put these lines below the datadir line. The max_connections parameter should be set to 350 multiplied by the number of Management Servers you are deploying. This example assumes two Management Servers.

    Note

    On Ubuntu, you can also create /etc/mysql/conf.d/cloudstack.cnf file and add these directives there. Don't forget to add [mysqld] on the first line of the file.
    innodb_rollback_on_timeout=1
    innodb_lock_wait_timeout=600
    max_connections=700
    log-bin=mysql-bin
    binlog-format = 'ROW'
    bind-address = 0.0.0.0
  3. Start or restart MySQL to put the new configuration into effect.
    On RHEL/CentOS, MySQL doesn't automatically start after installation. Start it manually.
    service mysqld start
    On Ubuntu, restart MySQL.
    service mysql restart
  4. (CentOS and RHEL only; not required on Ubuntu)

    Warning

    On RHEL and CentOS, MySQL does not set a root password by default. It is very strongly recommended that you set a root password as a security precaution.
    Run the following command to secure your installation. You can answer "Y" to all questions except "Disallow root login remotely?". Remote root login is required to set up the databases.
    mysql_secure_installation
  5. If a firewall is present on the system, open TCP port 3306 so external MySQL connections can be established.
    On Ubuntu, UFW is the default firewall. Open the port with this command:
    ufw allow mysql
    On RHEL/CentOS:
    1. Edit the /etc/sysconfig/iptables file and add the following line at the beginning of the INPUT chain.
      -A INPUT -p tcp --dport 3306 -j ACCEPT
    2. Now reload the iptables rules.
      service iptables restart
  6. Return to the root shell on your first Management Server.
  7. Set up the database. The following command creates the cloud user on the database.
    • In dbpassword, specify the password to be assigned to the cloud user. You can choose to provide no password, although that is not recommended.
    • In deploy-as, specify the username and password of the user deploying the database. In the following command, it is assumed the root user is deploying the database and creating the cloud user.
    • (Optional) For encryption_type, use file or web to indicate the technique used to pass in the database encryption password. Default: file. See Section 3.5.5, “About Password and Key Encryption”.
    • (Optional) For management_server_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack properties file. Default: password. It is highly recommended that you replace this with a more secure value. See About Password and Key Encryption.
    • (Optional) For database_key, substitute the default key that is used to encrypt confidential parameters in the CloudStack database. Default: password. It is highly recommended that you replace this with a more secure value. See Section 3.5.5, “About Password and Key Encryption”.
    • (Optional) For management_server_ip, you may explicitly specify the cluster Management Server node IP. If not specified, the local IP address will be used.
    cloudstack-setup-databases cloud:<dbpassword>@<ip address mysql server> \
    --deploy-as=root:<password> \
    -e <encryption_type> \
    -m <management_server_key> \
    -k <database_key> \
    -i <management_server_ip>
    When this script is finished, you should see a message like “Successfully initialized the database.”
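If the setup script cannot reach the remote database, a quick way to verify connectivity and remote root access from the Management Server is to connect with the MySQL client (assuming the client is installed on the Management Server):
# mysql -h <ip address mysql server> -u root -p -e 'SELECT VERSION();'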

3.5.5. About Password and Key Encryption

CloudStack stores several sensitive passwords and secret keys that are used to provide security. These values are always automatically encrypted:
  • Database secret key
  • Database password
  • SSH keys
  • Compute node root password
  • VPN password
  • User API secret key
  • VNC password
CloudStack uses the Java Simplified Encryption (JASYPT) library. The data values are encrypted and decrypted using a database secret key, which is stored in one of CloudStack’s internal properties files along with the database password. The other encrypted values listed above, such as SSH keys, are in the CloudStack internal database.
Of course, the database secret key itself cannot be stored in the open; it must be encrypted. How then does CloudStack read it? A second secret key must be provided from an external source during Management Server startup. This key can be provided in one of two ways: loaded from a file or provided by the CloudStack administrator. The CloudStack database has a configuration setting that lets it know which of these methods will be used. If the encryption type is set to "file," the key must be in a file in a known location. If the encryption type is set to "web," the administrator runs the utility com.cloud.utils.crypt.EncryptionSecretKeySender, which relays the key to the Management Server over a known port.
The encryption type, database secret key, and Management Server secret key are set during CloudStack installation. They are all parameters to the CloudStack database setup script (cloudstack-setup-databases). The default values are file, password, and password. It is, of course, highly recommended that you change these to more secure keys.
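One simple way to come up with stronger values is to generate random strings and pass them as the -m and -k arguments to cloudstack-setup-databases. This is only an illustrative sketch using openssl:
# MS_KEY=$(openssl rand -hex 16)
# DB_KEY=$(openssl rand -hex 16)
# echo "management_server_key: $MS_KEY   database_key: $DB_KEY"
Keep a record of the keys you use; the same -m and -k values are needed again when you prepare additional Management Servers against the same database.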

3.5.6. Prepare NFS Shares

CloudStack needs a place to keep primary and secondary storage (see Cloud Infrastructure Overview). Both of these can be NFS shares. This section tells how to set up the NFS shares before adding the storage to CloudStack.

Alternative Storage

NFS is not the only option for primary or secondary storage. For example, you may use Ceph RBD, GlusterFS, iSCSI, and others. The choice of storage system will depend on the choice of hypervisor and whether you are dealing with primary or secondary storage.
The requirements for primary and secondary storage are described in Section 2.6, “About Primary Storage” and Section 2.7, “About Secondary Storage”.
A production installation typically uses a separate NFS server. See Section 3.5.6.1, “Using a Separate NFS Server”.
You can also use the Management Server node as the NFS server. This is more typical of a trial installation, but is technically possible in a larger deployment. See Section 3.5.6.2, “Using the Management Server as the NFS Server”.

3.5.6.1. Using a Separate NFS Server

This section tells how to set up NFS shares for secondary and (optionally) primary storage on an NFS server running on a separate node from the Management Server.
The exact commands for the following steps may vary depending on your operating system version.

Warning

(KVM only) Ensure that no volume is already mounted at your NFS mount point.
  1. On the storage server, create an NFS share for secondary storage and, if you are using NFS for primary storage as well, create a second NFS share. For example:
    # mkdir -p /export/primary
    # mkdir -p /export/secondary
    
  2. To configure the new directories as NFS exports, edit /etc/exports. Export the NFS share(s) with rw,async,no_root_squash. For example:
    # vi /etc/exports
    Insert the following line.
    /export  *(rw,async,no_root_squash)
  3. Export the /export directory.
    # exportfs -a
  4. On the management server, create a mount point for secondary storage. For example:
    # mkdir -p /mnt/secondary
  5. Mount the secondary storage on your Management Server. Replace the example NFS server name and NFS share paths below with your own.
    # mount -t nfs nfsservername:/export/secondary /mnt/secondary
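You can also confirm that the exports are visible from the Management Server with showmount, which is part of the standard NFS client tools:
# showmount -e nfsservername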

3.5.6.2. Using the Management Server as the NFS Server

This section tells how to set up NFS shares for primary and secondary storage on the same node with the Management Server. This is more typical of a trial installation, but is technically possible in a larger deployment. It is assumed that you will have less than 16TB of storage on the host.
The exact commands for the following steps may vary depending on your operating system version.
  1. On RHEL/CentOS systems, you'll need to install the nfs-utils package:
    $ sudo yum install nfs-utils
    
  2. On the Management Server host, create two directories that you will use for primary and secondary storage. For example:
    # mkdir -p /export/primary
    # mkdir -p /export/secondary
    
  3. To configure the new directories as NFS exports, edit /etc/exports. Export the NFS share(s) with rw,async,no_root_squash. For example:
    # vi /etc/exports
    Insert the following line.
    /export  *(rw,async,no_root_squash)
  4. Export the /export directory.
    # exportfs -a
  5. Edit the /etc/sysconfig/nfs file.
    # vi /etc/sysconfig/nfs
    Uncomment the following lines:
    LOCKD_TCPPORT=32803
    LOCKD_UDPPORT=32769
    MOUNTD_PORT=892
    RQUOTAD_PORT=875
    STATD_PORT=662
    STATD_OUTGOING_PORT=2020
    
  6. Edit the /etc/sysconfig/iptables file.
    # vi /etc/sysconfig/iptables
    Add the following lines at the beginning of the INPUT chain where <NETWORK> is the network that you'll be using:
    -A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 111 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 111 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 2049 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 32803 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 32769 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 892 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 892 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 875 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 875 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p tcp --dport 662 -j ACCEPT
    -A INPUT -s <NETWORK> -m state --state NEW -p udp --dport 662 -j ACCEPT                
    
  7. Run the following commands:
    # service iptables restart
    # service iptables save
    
  8. If NFS v4 communication is used between client and server, add your domain to /etc/idmapd.conf on both the hypervisor host and Management Server.
    # vi /etc/idmapd.conf
    Remove the character # from the beginning of the Domain line in idmapd.conf and replace the value in the file with your own domain. In the example below, the domain is company.com.
    Domain = company.com
  9. Reboot the Management Server host.
    Two NFS shares called /export/primary and /export/secondary are now set up.
  10. It is recommended that you test to be sure the previous steps have been successful.
    1. Log in to the hypervisor host.
    2. Be sure NFS and rpcbind are running. The commands might be different depending on your OS. For example:
      # service rpcbind start
      # service nfs start
      # chkconfig nfs on
      # chkconfig rpcbind on
      # reboot                        
      
    3. Log back in to the hypervisor host and try to mount the /export directories. For example (substitute your own management server name):
      # mkdir /primarymount
      # mount -t nfs <management-server-name>:/export/primary /primarymount
      # umount /primarymount
      # mkdir /secondarymount
      # mount -t nfs <management-server-name>:/export/secondary /secondarymount
      # umount /secondarymount                        
      

3.5.7. Prepare and Start Additional Management Servers

For your second and subsequent Management Servers, you will install the Management Server software, connect it to the database, and set up the OS for the Management Server.
  1. This step is required only for installations where XenServer is installed on the hypervisor hosts.
    Download vhd-util.
    If the Management Server is RHEL or CentOS, copy vhd-util to /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver.
    If the Management Server is Ubuntu, copy vhd-util to /usr/lib/cloud/common/scripts/vm/hypervisor/xenserver/vhd-util.
  2. Ensure that necessary services are started and set to start on boot.
    # service rpcbind start
    # service nfs start
    # chkconfig nfs on
    # chkconfig rpcbind on
    
  3. Configure the database client. Note the absence of the --deploy-as argument in this case. (For more details about the arguments to this command, see Section 3.5.4.2, “Install the Database on a Separate Node”.)
    # cloudstack-setup-databases cloud:dbpassword@dbhost -e encryption_type -m management_server_key -k database_key -i management_server_ip
    
  4. Configure the OS and start the Management Server:
    # cloudstack-setup-management
    The Management Server on this node should now be running.
  5. Repeat these steps on each additional Management Server.
  6. Be sure to configure a load balancer for the Management Servers. See Section 22.2, “Management Server Load Balancing”.
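To confirm that each additional Management Server started correctly, you can check the service status and watch the Management Server log for errors. The log path shown here is the typical location for this release:
# service cloudstack-management status
# tail -f /var/log/cloudstack/management/management-server.log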

3.5.8. Prepare the System VM Template

Secondary storage must be seeded with a template that is used for CloudStack system VMs.

Note

When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
  1. On the Management Server, run one or more of the following cloud-install-sys-tmplt commands to retrieve and decompress the system VM template. Run the command for each hypervisor type that you expect end users to run in this Zone.
    If your secondary storage mount point is not named /mnt/secondary, substitute your own mount point name.
    If you set the CloudStack database encryption type to "web" when you set up the database, you must now add the parameter -s <management-server-secret-key>. See Section 3.5.5, “About Password and Key Encryption”.
    This process will require approximately 5 GB of free space on the local file system and up to 30 minutes each time it runs.
    • For XenServer:
      # /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2 -h xenserver -s <optional-management-server-secret-key> -F
    • For vSphere:
      # /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/burbank/burbank-systemvm-08012012.ova -h vmware -s <optional-management-server-secret-key>  -F
    • For KVM:
      # /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /mnt/secondary -u http://download.cloud.com/templates/acton/acton-systemvm-02062012.qcow2.bz2 -h kvm -s <optional-management-server-secret-key> -F
    On Ubuntu, use the following path instead:
    # /usr/lib/cloud/common/scripts/storage/secondary/cloud-install-sys-tmplt
  2. If you are using a separate NFS server, perform this step. If you are using the Management Server as the NFS server, you MUST NOT perform this step.
    When the script has finished, unmount secondary storage and remove the created directory.
    # umount /mnt/secondary
    # rmdir /mnt/secondary
  3. Repeat these steps for each secondary storage server.

3.5.9. Installation Complete! Next Steps

Congratulations! You have now installed CloudStack Management Server and the database it uses to persist system data.
installation-complete.png: Finished installs with single Management Server and multiple Management Servers
What should you do next?
  • Even without adding any cloud infrastructure, you can run the UI to get a feel for what's offered and how you will interact with CloudStack on an ongoing basis. See Log In to the UI.
  • When you're ready, add the cloud infrastructure and try running some virtual machines on it, so you can watch how CloudStack manages the infrastructure. See Provision Your Cloud Infrastructure.

3.6. Building RPMs from Source

As mentioned previously in Section 3.8, “Prerequisites for building Apache CloudStack”, you will need to install several prerequisites before you can build packages for CloudStack. Here we'll assume you're working with a 64-bit build of CentOS or Red Hat Enterprise Linux.
# yum groupinstall "Development Tools"
# yum install java-1.6.0-openjdk-devel.x86_64 genisoimage mysql mysql-server ws-commons-util MySQL-python tomcat6 createrepo
Next, you'll need to install build-time dependencies for CloudStack with Maven. We're using Maven 3, so you'll want to grab a Maven 3 tarball and uncompress it in a location of your choice (the example below assumes /usr/local):
$ tar zxvf apache-maven-3.0.4-bin.tar.gz
$ export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH
Maven also needs to know where Java is, and expects the JAVA_HOME environment variable to be set:
$ export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/
Verify that Maven is installed correctly:
$ mvn --version
You probably want to ensure that your environment variables will survive a logout/reboot. Be sure to update ~/.bashrc with the PATH and JAVA_HOME variables.
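For example, you could append the two exports to ~/.bashrc. The paths below match the examples above; adjust them to wherever you unpacked Maven and whichever JDK you installed:
$ cat >> ~/.bashrc <<'EOF'
export PATH=/usr/local/apache-maven-3.0.4/bin:$PATH
export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/
EOF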
Building RPMs for CloudStack is fairly simple. Assuming you already have the source downloaded and have uncompressed the tarball into a local directory, you're going to be able to generate packages in just a few minutes.

Packaging has Changed

If you've created packages for CloudStack previously, you should be aware that the process has changed considerably since the project has moved to using Apache Maven. Please be sure to follow the steps in this section closely.

3.6.1. Generating RPMS

Now that we have the prerequisites and source, you will cd to the packaging/centos63/ directory.
Generating RPMs is done using the package.sh script:
$ ./package.sh
That will run for a bit and then place the finished packages in dist/rpmbuild/RPMS/x86_64/.
You should see six RPMs in that directory:
  • cloudstack-agent-4.1.0.el6.x86_64.rpm
  • cloudstack-awsapi-4.1.0.el6.x86_64.rpm
  • cloudstack-cli-4.1.0.el6.x86_64.rpm
  • cloudstack-common-4.1.0.el6.x86_64.rpm
  • cloudstack-management-4.1.0.el6.x86_64.rpm
  • cloudstack-usage-4.1.0.el6.x86_64.rpm

Filename Variations

The file names may vary slightly. For instance, if you were to build the RPMs on a Fedora 18 system, you'd see "fc18" instead of "el6" in the filename. (Fedora 18 isn't a supported platform at this time, just providing an example.)

3.6.1.1. Creating a yum repo

While RPM is a useful packaging format, RPMs are most easily consumed from yum repositories over a network. The next step is to create a yum repo with the finished packages:
$ mkdir -p ~/tmp/repo
$ cp dist/rpmbuild/RPMS/x86_64/*rpm ~/tmp/repo/
$ createrepo ~/tmp/repo
The files and directories within ~/tmp/repo can now be uploaded to a web server and serve as a yum repository.
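Once the repository has been uploaded, a quick sanity check is to fetch the metadata index that createrepo generated (substitute your own web server URL):
$ curl -I http://webserver.tld/path/to/repo/repodata/repomd.xml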

3.6.1.2. Configuring your systems to use your new yum repository

Now that your yum repository is populated with RPMs and metadata, you need to configure the machines that will install CloudStack. Create a file named /etc/yum.repos.d/cloudstack.repo with this information:
[apache-cloudstack]
name=Apache CloudStack
baseurl=http://webserver.tld/path/to/repo
enabled=1
gpgcheck=0
Completing this step will allow you to easily install CloudStack on a number of machines across the network.
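For example, with the repository file in place, you can install the Management Server package (or any of the other packages built above) directly with yum:
# yum install cloudstack-management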

3.7. Building DEB packages

In addition to the bootstrap dependencies, you'll also need to install several other dependencies. Note that we recommend using Maven 3, which is not currently available in Ubuntu 12.04.1 LTS, so you'll also need to add a PPA repository that includes Maven 3. After running the add-apt-repository command, you will be prompted to continue and a GPG key will be added.
        $ sudo apt-get update
        $ sudo apt-get install python-software-properties
        $ sudo add-apt-repository ppa:natecarlson/maven3
        $ sudo apt-get update
        $ sudo apt-get install ant debhelper openjdk-6-jdk tomcat6 libws-commons-util-java genisoimage python-mysqldb libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java maven3
Now that we have resolved the dependencies, we can move on to building CloudStack and packaging it into DEBs.
        $ mvn clean install -P developer,systemvm
        $ dpkg-buildpackage -uc -us
This command will build seven Debian packages. You should have the following:
  • cloudstack-agent_4.1.0_all.deb
  • cloudstack-awsapi_4.1.0_all.deb
  • cloudstack-cli_4.1.0_all.deb
  • cloudstack-common_4.1.0_all.deb
  • cloudstack-docs_4.1.0_all.deb
  • cloudstack-management_4.1.0_all.deb
  • cloudstack-usage_4.1.0_all.deb

3.7.1. Setting up an APT repo

After you've created the packages, you'll want to copy them to a system where you can serve the packages over HTTP. You'll create a directory for the packages and then use dpkg-scanpackages to create Packages.gz, which holds information about the archive structure. Finally, you'll add the repository to your system(s) so you can install the packages using APT.
The first step is to make sure that you have the dpkg-dev package installed. This should have been installed when you pulled in the debhelper application previously, but if you're generating Packages.gz on a different system, be sure that it's installed there as well.
$ sudo apt-get install dpkg-dev
The next step is to copy the DEBs to the directory where they can be served over HTTP. We'll use /var/www/cloudstack/repo in the examples, but change the directory to whatever works for you.
$ sudo mkdir -p /var/www/cloudstack/repo/binary
$ sudo cp *.deb /var/www/cloudstack/repo/binary
$ cd /var/www/cloudstack/repo/binary
$ sudo sh -c 'dpkg-scanpackages . /dev/null | tee Packages | gzip -9 > Packages.gz'

Note: Override Files

You can safely ignore the warning about a missing override file.
Now you should have all of the DEB packages and Packages.gz in the binary directory and available over HTTP. (You may want to use wget or curl to test this before moving on to the next step.)
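For example, using wget against the repository layout created above (substitute your own server URL):
$ wget http://server.url/cloudstack/repo/binary/Packages.gz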

3.7.2. Configuring your machines to use the APT repository

Now that we have created the repository, you need to configure your machine to make use of the APT repository. You can do this by adding a repository file under /etc/apt/sources.list.d. Use your preferred editor to create /etc/apt/sources.list.d/cloudstack.list with this line:
deb http://server.url/cloudstack/repo/binary ./
Now that you have the repository info in place, you'll want to run another update so that APT knows where to find the CloudStack packages.
$ sudo apt-get update
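As an optional sanity check, APT should now list the CloudStack packages from your repository:
$ apt-cache search cloudstack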
You can now move on to the instructions under Install on Ubuntu.

3.8. Prerequisites for building Apache CloudStack

There are a number of prerequisites needed to build CloudStack. This document assumes compilation on a Linux system that uses RPMs or DEBs for package management.
You will need, at a minimum, the following to compile CloudStack:
  1. Maven (version 3)
  2. Java (OpenJDK 1.6 or Java 7/OpenJDK 1.7)
  3. Apache Web Services Common Utilities (ws-commons-util)
  4. MySQL
  5. MySQLdb (provides Python database API)
  6. Tomcat 6 (not 6.0.35)
  7. genisoimage
  8. rpmbuild or dpkg-dev

Chapter 4. User Interface

4.1. Log In to the UI

CloudStack provides a web-based UI that can be used by both administrators and end users. The appropriate version of the UI is displayed depending on the credentials used to log in. The UI is available in popular browsers including IE7, IE8, IE9, Firefox 3.5+, Firefox 4, Safari 4, and Safari 5. The URL is: (substitute your own management server IP address)
http://<management-server-ip-address>:8080/client
On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll see a login screen where you specify the following to proceed to your Dashboard:
Username
The user ID of your account. The default username is admin.
Password
The password associated with the user ID. The password for the default username is password.
Domain
If you are a root user, leave this field blank.
If you are a user in a sub-domain, enter the full path to the domain, excluding the root domain.
For example, suppose multiple levels are created under the root domain, such as Comp1/hr and Comp1/sales. Users in the Comp1 domain should enter Comp1 in the Domain field, whereas users in the Comp1/sales domain should enter Comp1/sales.
For more guidance about the choices that appear when you log in to this UI, see Logging In as the Root Administrator.

4.1.1. End User's UI Overview

The CloudStack UI helps users of cloud infrastructure to view and use their cloud resources, including virtual machines, templates and ISOs, data volumes and snapshots, guest networks, and IP addresses. If the user is a member or administrator of one or more CloudStack projects, the UI can provide a project-oriented view.

4.1.2. Root Administrator's UI Overview

The CloudStack UI helps the CloudStack administrator provision, view, and manage the cloud infrastructure, domains, user accounts, projects, and configuration settings. The first time you start the UI after a fresh Management Server installation, you can choose to follow a guided tour to provision your cloud infrastructure. On subsequent logins, the dashboard of the logged-in user appears. The various links in this screen and the navigation bar on the left provide access to a variety of administrative functions. The root administrator can also use the UI to perform all the same tasks that are present in the end-user’s UI.

4.1.3. Logging In as the Root Administrator

After the Management Server software is installed and running, you can run the CloudStack user interface. This UI is there to help you provision, view, and manage your cloud infrastructure.
  1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
    http://<management-server-ip-address>:8080/client
    After logging into a fresh Management Server installation, a guided tour splash screen appears. On later visits, you’ll be taken directly into the Dashboard.
  2. If you see the first-time splash screen, choose one of the following.
    • Continue with basic setup. Choose this if you're just trying CloudStack, and you want a guided walkthrough of the simplest possible configuration so that you can get started right away. We'll help you set up a cloud with the following features: a single machine that runs CloudStack software and uses NFS to provide storage; a single machine running VMs under the XenServer or KVM hypervisor; and a shared public network.
      The prompts in this guided tour should give you all the information you need, but if you want just a bit more detail, you can follow along in the Trial Installation Guide.
    • I have used CloudStack before. Choose this if you have already gone through a design phase and planned a more sophisticated deployment, or you are ready to start scaling up a trial cloud that you set up earlier with the basic setup screens. In the Administrator UI, you can start using the more powerful features of CloudStack, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere.
      The root administrator Dashboard appears.
  3. You should set a new root administrator password. If you chose basic setup, you’ll be prompted to create a new password right away. If you chose experienced user, use the steps in Section 4.1.4, “Changing the Root Password”.

Warning

You are logging in as the root administrator. This account manages the CloudStack deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. Please change the default password to a new, unique password.

4.1.4. Changing the Root Password

During installation and ongoing cloud administration, you will need to log in to the UI as the root administrator. The root administrator account manages the CloudStack deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. When first installing CloudStack, be sure to change the default password to a new, unique value.
  1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:
    http://<management-server-ip-address>:8080/client
  2. Log in to the UI using the current root user ID and password. The default is admin, password.
  3. Click Accounts.
  4. Click the admin account name.
  5. Click View Users.
  6. Click the admin user name.
  7. Click the Change Password button. change-password.png: button to change a user's password
  8. Type the new password, and click OK.

4.2. Using SSH Keys for Authentication

In addition to username and password authentication, CloudStack supports using SSH keys to log in to the cloud infrastructure for additional security. You can use the createSSHKeyPair API to generate the SSH keys.
Because each cloud user has their own SSH key, one cloud user cannot log in to another cloud user's instances unless they share their SSH key files. Using a single SSH key pair, you can manage multiple instances.

4.2.1.  Creating an Instance Template that Supports SSH Keys

Create an instance template that supports SSH keys.
  1. Create a new instance by using the template provided by CloudStack.
    For more information on creating a new instance, see the instructions elsewhere in this guide.
  2. Download the cloud-set-guest-sshkey script (the SSH Key Gen Script) to the instance you have created:
    wget http://downloads.sourceforge.net/project/cloudstack/SSH%20Key%20Gen%20Script/cloud-set-guest-sshkey.in?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fcloudstack%2Ffiles%2FSSH%2520Key%2520Gen%2520Script%2F&ts=1331225219&use_mirror=iweb
  3. Copy the file to /etc/init.d.
    cp cloud-set-guest-sshkey.in /etc/init.d/
  4. Give the necessary permissions on the script:
    chmod +x /etc/init.d/cloud-set-guest-sshkey.in
  5. Configure the script to run when the operating system starts up:
    chkconfig --add cloud-set-guest-sshkey.in
  6. Stop the instance.

4.2.2. Creating the SSH Keypair

You must make a call to the createSSHKeyPair API method. You can use either the CloudStack Python API library or curl to make the call to the CloudStack API.
For example, make a call from the CloudStack server to create an SSH keypair called "keypair-doc" for the admin account in the root domain:

Note

Ensure that you adjust these values to meet your needs. If you are making the API call from a different server, your URL/PORT will be different, and you will need to use the API keys.
  1. Run the following curl command:
    curl --globoff "http://localhost:8096/?command=createSSHKeyPair&name=keypair-doc&account=admin&domainid=5163440e-c44b-42b5-9109-ad75cae8e8a2"
    The output is something similar to what is given below:
    <?xml version="1.0" encoding="ISO-8859-1"?><createsshkeypairresponse cloud-stack-version="3.0.0.20120228045507"><keypair><name>keypair-doc</name><fingerprint>f6:77:39:d5:5e:77:02:22:6a:d8:7f:ce:ab:cd:b3:56</fingerprint><privatekey>-----BEGIN RSA PRIVATE KEY-----
    MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
    dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
    AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
    mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
    QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
    VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
    lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
    nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
    4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
    KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
    -----END RSA PRIVATE KEY-----
    </privatekey></keypair></createsshkeypairresponse>
  2. Copy the key data into a file. The file looks like this:
    -----BEGIN RSA PRIVATE KEY-----
    MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
    dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
    AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
    mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
    QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
    VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
    lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
    nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
    4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
    KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
    -----END RSA PRIVATE KEY-----
  3. Save the file.
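Before using the key, restrict its file permissions; the ssh client refuses to use a private key file that other users can read. Assuming you saved it as ~/.ssh/keypair-doc (the path used in the login example later in this section):
chmod 600 ~/.ssh/keypair-doc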

4.2.3. Creating an Instance

After you save the SSH keypair file, you must create an instance by using the template that you created at Section 4.2.1, “ Creating an Instance Template that Supports SSH Keys”. Ensure that you use the same SSH key name that you created at Section 4.2.2, “Creating the SSH Keypair”.

Note

At this time, you cannot use the GUI to create the instance and associate it with the newly created SSH keypair.
A sample curl command to create a new instance is:
curl --globoff http://localhost:<port number>/?command=deployVirtualMachine\&zoneId=1\&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813\&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5\&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5\&account=admin\&domainid=1\&keypair=keypair-doc
Substitute the template, service offering and security group IDs (if you are using the security group feature) that are in your cloud environment.

4.2.4. Logging In Using the SSH Keypair

To verify that your SSH key generation was successful, check whether you can log in to the cloud setup.
For example, from a Linux OS, run:
ssh -i ~/.ssh/keypair-doc <ip address>
The -i parameter tells the ssh client to use an SSH key found at ~/.ssh/keypair-doc.

4.2.5. Resetting SSH Keys

With the API command resetSSHKeyForVirtualMachine, a user can set or reset the SSH keypair assigned to a virtual machine. A lost or compromised SSH keypair can be changed, and the user can access the VM by using the new keypair. Just create or register a new keypair, then call resetSSHKeyForVirtualMachine.
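As a hypothetical illustration, following the same curl style used earlier in this chapter (the VM ID and domain ID are placeholders, and the call assumes the same unauthenticated integration API port as the earlier keypair example):
curl --globoff "http://localhost:8096/?command=resetSSHKeyForVirtualMachine&id=<vm-id>&keypair=keypair-doc&account=admin&domainid=<domain-id>"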

Chapter 5. Steps to Provisioning Your Cloud Infrastructure

This section tells how to add regions, zones, pods, clusters, hosts, storage, and networks to your cloud. If you are unfamiliar with these entities, please begin by looking through Chapter 2, Cloud Infrastructure Concepts.

5.1. Overview of Provisioning Steps

After the Management Server is installed and running, you can add the compute resources for it to manage. For an overview of how a CloudStack cloud infrastructure is organized, see Section 1.3.2, “Cloud Infrastructure Overview”.
To provision the cloud infrastructure, or to scale it up at any time, follow these procedures:
  1. Define regions (optional). See Section 5.2, “Adding Regions (optional)”.
  2. Add a zone to the region. See Section 5.3, “Adding a Zone”.
  3. Add more pods to the zone (optional). See Section 5.4, “Adding a Pod”.
  4. Add more clusters to the pod (optional). See Section 5.5, “Adding a Cluster”.
  5. Add more hosts to the cluster (optional). See Section 5.6, “Adding a Host”.
  6. Add primary storage to the cluster. See Section 5.7, “Add Primary Storage”.
  7. Add secondary storage to the zone. See Section 5.8, “Add Secondary Storage”.
  8. Initialize and test the new cloud. See Section 5.9, “Initialize and Test”.
When you have finished these steps, you will have a deployment with the following basic structure:
provisioning-overview.png: Conceptual overview of a basic deployment

5.2. Adding Regions (optional)

Grouping your cloud resources into geographic regions is an optional step when provisioning the cloud. For an overview of regions, see Section 2.1, “About Regions”.

5.2.1. The First Region: The Default Region

If you do not take action to define regions, then all the zones in your cloud will be automatically grouped into a single default region. This region is assigned the region ID of 1.
You can change the name or URL of the default region by using the API command updateRegion. For example:
http://<IP_of_Management_Server>:8080/client/api?command=updateRegion&id=1&name=Northern&endpoint=http://<region_1_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D

5.2.2. Adding a Region

Use these steps to add a second region in addition to the default region.
  1. Each region has its own CloudStack instance. Therefore, the first step of creating a new region is to install the Management Server software, on one or more nodes, in the geographic area where you want to set up the new region. Use the steps in the Installation guide. When you come to the step where you set up the database, use the additional command-line flag -r <region_id> to set a region ID for the new region. The default region is automatically assigned a region ID of 1, so your first additional region might be region 2.
    cloudstack-setup-databases cloud:<dbpassword>@localhost --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key> -r <region_id>
  2. By the end of the installation procedure, the Management Server should have been started. Be sure that the Management Server installation was successful and complete.
  3. Add region 2 to region 1. Use the API command addRegion. (For information about how to make an API call, see the Developer's Guide.)
    http://<IP_of_region_1_Management_Server>:8080/client/api?command=addRegion&id=2&name=Western&endpoint=http://<region_2_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
  4. Now perform the same command in reverse, adding region 1 to region 2.
    http://<IP_of_region_2_Management_Server>:8080/client/api?command=addRegion&id=1&name=Northern&endpoint=http://<region_1_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
  5. Copy the account, user, and domain tables from the region 1 database to the region 2 database.
    In the following commands, it is assumed that you have set the root password on the database, which is a CloudStack recommended best practice. Substitute your own MySQL root password.
    1. First, run this command to copy the contents of the database:
      # mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
    2. Then run this command to put the data onto the region 2 database:
      # mysql -u root -p<mysql_password> -h <region2_db_host> cloud < region1.sql
  6. Remove project accounts. Run this command on the region 2 database:
    mysql> delete from account where type = 5;
  7. Set the default zone as null:
    mysql> update account set default_zone_id = null;
  8. Restart the Management Servers in region 2.

5.2.3. Adding Third and Subsequent Regions

To add the third region, and subsequent additional regions, the steps are similar to those for adding the second region. However, you must repeat certain steps for each additional region:
  1. Install CloudStack in each additional region. Set the region ID for each region during the database setup step.
    cloudstack-setup-databases cloud:<dbpassword>@localhost --deploy-as=root:<password> -e <encryption_type> -m <management_server_key> -k <database_key> -r <region_id>
  2. Once the Management Server is running, add your new region to all existing regions by repeatedly calling the API command addRegion. For example, if you were adding region 3:
    http://<IP_of_region_1_Management_Server>:8080/client/api?command=addRegion&id=3&name=Eastern&endpoint=http://<region_3_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
    
    http://<IP_of_region_2_Management_Server>:8080/client/api?command=addRegion&id=3&name=Eastern&endpoint=http://<region_3_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
  3. Repeat the procedure in reverse to add all existing regions to the new region. For example, for the third region, add the other two existing regions:
    http://<IP_of_region_3_Management_Server>:8080/client/api?command=addRegion&id=1&name=Northern&endpoint=http://<region_1_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
    
    http://<IP_of_region_3_Management_Server>:8080/client/api?command=addRegion&id=2&name=Western&endpoint=http://<region_2_IP_address_here>:8080/client&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
  4. Copy the account, user, and domain tables from any existing region's database to the new region's database.
    In the following commands, it is assumed that you have set the root password on the database, which is a CloudStack recommended best practice. Substitute your own MySQL root password.
    1. First, run this command to copy the contents of the database:
      # mysqldump -u root -p<mysql_password> -h <region1_db_host> cloud account user domain > region1.sql
    2. Then run this command to put the data onto the new region's database. For example, for region 3:
      # mysql -u root -p<mysql_password> -h <region3_db_host> cloud < region1.sql
  5. Remove project accounts. Run this command on the new region's database:
    mysql> delete from account where type = 5;
  6. Set the default zone as null:
    mysql> update account set default_zone_id = null;
  7. Restart the Management Servers in the new region.

5.2.4. Deleting a Region

To delete a region, use the API command removeRegion. Repeat the call to remove the region from all other regions. For example, to remove the third region in a three-region cloud:
http://<IP_of_region_1_Management_Server>:8080/client/api?command=removeRegion&id=3&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D

http://<IP_of_region_2_Management_Server>:8080/client/api?command=removeRegion&id=3&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D

5.3. Adding a Zone

These steps assume you have already logged in to the CloudStack UI. See Section 4.1, “Log In to the UI”.
  1. (Optional) If you are going to use Swift for cloud-wide secondary storage, you need to add it before you add zones.
    1. Log in to the CloudStack UI as administrator.
    2. If this is your first time visiting the UI, you will see the guided tour splash screen. Choose “Experienced user.” The Dashboard appears.
    3. In the left navigation bar, click Global Settings.
    4. In the search box, type swift.enable and click the search button.
    5. Click the edit button and set swift.enable to true. edit-icon.png: button to modify data
    6. Restart the Management Server.
      # service cloudstack-management restart
    7. Refresh the CloudStack UI browser tab and log back in.
  2. In the left navigation, choose Infrastructure.
  3. On Zones, click View More.
  4. (Optional) If you are using Swift storage, click Enable Swift. Provide the following:
    • URL. The Swift URL.
    • Account. The Swift account.
    • Username. The Swift account’s username.
    • Key. The Swift key.
  5. Click Add Zone. The zone creation wizard will appear.
  6. Choose one of the following network types:
    • Basic. For AWS-style networking. Provides a single network where each VM instance is assigned an IP directly from the network. Guest isolation can be provided through layer-3 means such as security groups (IP address source filtering).
    • Advanced. For more sophisticated network topologies. This network model provides the most flexibility in defining guest networks and providing custom network offerings such as firewall, VPN, or load balancer support.
    For more information about the network types, see Section 2.8, “About Physical Networks”.
  7. The rest of the steps differ depending on whether you chose Basic or Advanced. Continue with the steps that apply to you:

5.3.1. Basic Zone Configuration

  1. After you select Basic in the Add Zone wizard and click Next, you will be asked to enter the following details. Then click Next.
    • Name. A name for the zone.
    • DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
    • Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the zone (these are VMs used by CloudStack itself, such as virtual routers, console proxies, and Secondary Storage VMs.) These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
    • Hypervisor. (Introduced in version 3.0.1) Choose the hypervisor for the first cluster in the zone. You can add clusters with different hypervisors later, after you finish adding the zone.
    • Network Offering. Your choice here determines what network services will be available on the network for guest VMs.
      • DefaultSharedNetworkOfferingWithSGService. If you want to enable security groups for guest traffic isolation, choose this. (See Using Security Groups to Control Traffic to VMs.)
      • DefaultSharedNetworkOffering. If you do not need security groups, choose this.
      • DefaultSharedNetscalerEIPandELBNetworkOffering. If you have installed a Citrix NetScaler appliance as part of your zone network, and you will be using its Elastic IP and Elastic Load Balancing features, choose this. With the EIP and ELB features, a basic zone with security groups enabled can offer 1:1 static NAT and load balancing.
    • Network Domain. (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
    • Public. A public zone is available to all users. A zone that is not public will be assigned to a particular domain. Only users in that domain will be allowed to create guest VMs in this zone.
  2. Choose which traffic types will be carried by the physical network.
    The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips, or see Basic Zone Network Traffic Types. This screen starts out with some traffic types already assigned. To add more, drag and drop traffic types onto the network. You can also change the network name if desired.
  3. (Introduced in version 3.0.1) Assign a network traffic label to each traffic type on the physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon. A popup dialog appears where you can type the label, then click OK.
    These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created.
  4. Click Next.
  5. (NetScaler only) If you chose the network offering for NetScaler, you have an additional screen to fill out. Provide the requested details to set up the NetScaler, then click Next.
    • IP address. The NSIP (NetScaler IP) address of the NetScaler device.
    • Username/Password. The authentication credentials to access the device. CloudStack uses these credentials to access the device.
    • Type. NetScaler device type that is being added. It could be NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the types, see About Using a NetScaler Load Balancer.
    • Public interface. Interface of NetScaler that is configured to be part of the public network.
    • Private interface. Interface of NetScaler that is configured to be part of the private network.
    • Number of retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2.
    • Capacity. Number of guest networks/accounts that will share this NetScaler device.
    • Dedicated. When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance – implicitly, its value is 1.
  6. (NetScaler only) Configure the IP range for public traffic. The IPs in this range will be used for the static NAT capability which you enabled by selecting the network offering for NetScaler with EIP and ELB. Enter the following details, then click Add. If desired, you can repeat this step to add more IP ranges. When done, click Next.
    • Gateway. The gateway in use for these IP addresses.
    • Netmask. The netmask associated with this IP range.
    • VLAN. The VLAN that will be used for public traffic.
    • Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest VMs.
  7. In a new zone, CloudStack adds the first pod for you. You can always add more pods later. For an overview of what a pod is, see Section 2.3, “About Pods”.
    To configure the first pod, enter the following, then click Next:
    • Pod Name. A name for the pod.
    • Reserved system gateway. The gateway for the hosts in that pod.
    • Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
    • Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
  8. Configure the network for guest traffic. Provide the following, then click Next:
    • Guest gateway. The gateway that the guests should use.
    • Guest netmask. The netmask in use on the subnet the guests will use.
    • Guest start IP/End IP. Enter the first and last IP addresses that define a range that CloudStack can assign to guests.
      • We strongly recommend the use of multiple NICs. If multiple NICs are used, they may be in a different subnet.
      • If one NIC is used, these IPs should be in the same CIDR as the pod CIDR.
  9. In a new pod, CloudStack adds the first cluster for you. You can always add more clusters later. For an overview of what a cluster is, see About Clusters.
    To configure the first cluster, enter the following, then click Next:
    • Hypervisor. (Version 3.0.0 only; in 3.0.1, this field is read only) Choose the type of hypervisor software that all hosts in this cluster will run. If you choose VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere.
    • Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
  10. In a new cluster, CloudStack adds the first host for you. You can always add more hosts later. For an overview of what a host is, see About Hosts.

    Note

    When you add a hypervisor host to CloudStack, the host must not have any VMs already running.
    Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see:
    • Citrix XenServer Installation and Configuration
    • VMware vSphere Installation and Configuration
    • KVM Installation and Configuration
    To configure the first host, enter the following, then click Next:
    • Host Name. The DNS name or IP address of the host.
    • Username. The username is root.
    • Password. This is the password for the user named above (from your XenServer or KVM install).
    • Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts.
  11. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see About Primary Storage.
    To configure the first primary storage server, enter the following, then click Next:
    • Name. The name of the storage device.
    • Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere, choose either VMFS (iSCSI or Fiber Channel) or NFS. The remaining fields in the screen vary depending on what you choose here.

5.3.2. Advanced Zone Configuration

  1. After you select Advanced in the Add Zone wizard and click Next, you will be asked to enter the following details. Then click Next.
    • Name. A name for the zone.
    • DNS 1 and 2. These are DNS servers for use by guest VMs in the zone. These DNS servers will be accessed via the public network you will add later. The public IP addresses for the zone must have a route to the DNS server named here.
    • Internal DNS 1 and Internal DNS 2. These are DNS servers for use by system VMs in the zone (these are VMs used by CloudStack itself, such as virtual routers, console proxies, and Secondary Storage VMs). These DNS servers will be accessed via the management traffic network interface of the System VMs. The private IP address you provide for the pods must have a route to the internal DNS server named here.
    • Network Domain. (Optional) If you want to assign a special domain name to the guest VM network, specify the DNS suffix.
    • Guest CIDR. This is the CIDR that describes the IP addresses in use in the guest virtual networks in this zone. For example, 10.1.1.0/24. As a matter of good practice you should set different CIDRs for different zones. This will make it easier to set up VPNs between networks in different zones.
    • Hypervisor. (Introduced in version 3.0.1) Choose the hypervisor for the first cluster in the zone. You can add clusters with different hypervisors later, after you finish adding the zone.
    • Public. A public zone is available to all users. A zone that is not public will be assigned to a particular domain. Only users in that domain will be allowed to create guest VMs in this zone.
  2. Choose which traffic types will be carried by the physical network.
    The traffic types are management, public, guest, and storage traffic. For more information about the types, roll over the icons to display their tool tips, or see Section 2.8.3, “Advanced Zone Network Traffic Types”. This screen starts out with one network already configured. If you have multiple physical networks, you need to add more. Drag and drop traffic types onto a greyed-out network and it will become active. You can move the traffic icons from one network to another; for example, if the default traffic types shown for Network 1 do not match your actual setup, you can move them down. You can also change the network names if desired.
  3. (Introduced in version 3.0.1) Assign a network traffic label to each traffic type on each physical network. These labels must match the labels you have already defined on the hypervisor host. To assign each label, click the Edit button under the traffic type icon within each physical network. A popup dialog appears where you can type the label, then click OK.
    These traffic labels will be defined only for the hypervisor selected for the first cluster. For all other hypervisors, the labels can be configured after the zone is created.
  4. Click Next.
  5. Configure the IP range for public Internet traffic. Enter the following details, then click Add. If desired, you can repeat this step to add more public Internet IP ranges. When done, click Next.
    • Gateway. The gateway in use for these IP addresses.
    • Netmask. The netmask associated with this IP range.
    • VLAN. The VLAN that will be used for public traffic.
    • Start IP/End IP. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.
  6. In a new zone, CloudStack adds the first pod for you. You can always add more pods later. For an overview of what a pod is, see Section 2.3, “About Pods”.
    To configure the first pod, enter the following, then click Next:
    • Pod Name. A name for the pod.
    • Reserved system gateway. The gateway for the hosts in that pod.
    • Reserved system netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
    • Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see Section 2.8.6, “System Reserved IP Addresses”.
  7. Specify a range of VLAN IDs to carry guest traffic for each physical network (see VLAN Allocation Example), then click Next.
  8. In a new pod, CloudStack adds the first cluster for you. You can always add more clusters later. For an overview of what a cluster is, see Section 2.4, “About Clusters”.
    To configure the first cluster, enter the following, then click Next:
    • Hypervisor. (Version 3.0.0 only; in 3.0.1, this field is read only) Choose the type of hypervisor software that all hosts in this cluster will run. If you choose VMware, additional fields appear so you can give information about a vSphere cluster. For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere .
    • Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
  9. In a new cluster, CloudStack adds the first host for you. You can always add more hosts later. For an overview of what a host is, see Section 2.5, “About Hosts”.

    Note

    When you deploy CloudStack, the hypervisor host must not have any VMs already running.
    Before you can configure the host, you need to install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see:
    • Citrix XenServer Installation for CloudStack
    • VMware vSphere Installation and Configuration
    • KVM Installation and Configuration
    To configure the first host, enter the following, then click Next:
    • Host Name. The DNS name or IP address of the host.
    • Username. Usually root.
    • Password. This is the password for the user named above (from your XenServer or KVM install).
    • Host Tags. (Optional) Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts, both in the Administration Guide.
  10. In a new cluster, CloudStack adds the first primary storage server for you. You can always add more servers later. For an overview of what primary storage is, see Section 2.6, “About Primary Storage”.
    To configure the first primary storage server, enter the following, then click Next:
    • Name. The name of the storage device.
    • Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS, SharedMountPoint, CLVM, or RBD. For vSphere, choose either VMFS (iSCSI or FiberChannel) or NFS. The remaining fields in the screen vary depending on what you choose here.
      NFS
      • Server. The IP address or DNS name of the storage device.
      • Path. The exported path from the server.
      • Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
      The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
      iSCSI
      • Server. The IP address or DNS name of the storage device.
      • Target IQN. The IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
      • Lun. The LUN number. For example, 3.
      • Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
      The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
      preSetup
      • Server. The IP address or DNS name of the storage device.
      • SR Name-Label. Enter the name-label of the SR that has been set up outside CloudStack.
      • Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
      The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
      SharedMountPoint
      • Path. The path on each host that is where this primary storage is mounted. For example, "/mnt/primary".
      • Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
      The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
      VMFS
      • Server. The IP address or DNS name of the vCenter server.
      • Path. A combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
      • Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
      The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
  11. In a new zone, CloudStack adds the first secondary storage server for you. For an overview of what secondary storage is, see Section 2.7, “About Secondary Storage”.
    Before you can fill out this screen, you need to prepare the secondary storage by setting up NFS shares and installing the latest CloudStack System VM template. See Adding Secondary Storage. Then enter the following:
    • NFS Server. The IP address of the server or fully qualified domain name of the server.
    • Path. The exported path from the server.
  12. Click Launch.

5.4. Adding a Pod

When you created a new zone, CloudStack adds the first pod for you. You can add more pods at any time using the procedure in this section.
  1. Log in to the CloudStack UI. See Section 4.1, “Log In to the UI”.
  2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone to which you want to add a pod.
  3. Click the Compute and Storage tab. In the Pods node of the diagram, click View All.
  4. Click Add Pod.
  5. Enter the following details in the dialog.
    • Name. The name of the pod.
    • Gateway. The gateway for the hosts in that pod.
    • Netmask. The network prefix that defines the pod's subnet. Use CIDR notation.
    • Start/End Reserved System IP. The IP range in the management network that CloudStack uses to manage various system VMs, such as Secondary Storage VMs, Console Proxy VMs, and DHCP. For more information, see System Reserved IP Addresses.
  6. Click OK.

5.5. Adding a Cluster

You need to tell CloudStack about the hosts that it will manage. Hosts exist inside clusters, so before you begin adding hosts to the cloud, you must add at least one cluster.

5.5.1. Add Cluster: KVM or XenServer

These steps assume you have already installed the hypervisor on the hosts and logged in to the CloudStack UI.
  1. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster.
  2. Click the Compute tab.
  3. In the Clusters node of the diagram, click View All.
  4. Click Add Cluster.
  5. Choose the hypervisor type for this cluster.
  6. Choose the pod in which you want to create the cluster.
  7. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack.
  8. Click OK.

5.5.2. Add Cluster: vSphere

Host management for vSphere is done through a combination of vCenter and the CloudStack admin UI. CloudStack requires that all hosts be in a CloudStack cluster, but the cluster may consist of a single host. As an administrator you must decide if you would like to use clusters of one host or of multiple hosts. Clusters of multiple hosts allow for features like live migration. Clusters also require shared storage such as NFS or iSCSI.
For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. Follow these requirements:
  • Do not put more than 8 hosts in a vSphere cluster
  • Make sure the hypervisor hosts do not have any VMs already running before you add them to CloudStack.
To add a vSphere cluster to CloudStack:
  1. Create the cluster of hosts in vCenter. Follow the vCenter instructions to do this. You will create a cluster that looks something like this in vCenter.
    vsphereclient.png: vSphere client
  2. Log in to the UI.
  3. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the cluster.
  4. Click the Compute tab, and click View All on Pods. Choose the pod to which you want to add the cluster.
  5. Click View Clusters.
  6. Click Add Cluster.
  7. In Hypervisor, choose VMware.
  8. Provide the following information in the dialog. The fields below make reference to values from vCenter.
    • Cluster Name. Enter the name of the cluster you created in vCenter. For example, "cloud.cluster.2.2.1"
    • vCenter Host. Enter the hostname or IP address of the vCenter server.
    • vCenter Username. Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.
    • vCenter Password. Enter the password for the user named above.
    • vCenter Datacenter. Enter the vCenter datacenter that the cluster is in. For example, "cloud.dc.VM".
    addcluster.png: add cluster
    There might be a slight delay while the cluster is provisioned. It will automatically display in the UI.

5.6. Adding a Host

  1. Before adding a host to the CloudStack configuration, you must first install your chosen hypervisor on the host. CloudStack can manage hosts running VMs under a variety of hypervisors.
    The CloudStack Installation Guide provides instructions on how to install each supported hypervisor and configure it for use with CloudStack. See the appropriate section in the Installation Guide for information about which version of your chosen hypervisor is supported, as well as crucial additional steps to configure the hypervisor hosts for use with CloudStack.

    Warning

    Be sure you have performed the additional CloudStack-specific configuration steps described in the hypervisor installation section for your particular hypervisor.
  2. Now add the hypervisor host to CloudStack. The technique to use varies depending on the hypervisor.

5.6.1. Adding a Host (XenServer or KVM)

XenServer and KVM hosts can be added to a cluster at any time.

5.6.1.1. Requirements for XenServer and KVM Hosts

Warning

Make sure the hypervisor host does not have any VMs already running before you add it to CloudStack.
Configuration requirements:
  • Each cluster must contain only hosts with the identical hypervisor.
  • For XenServer, do not put more than 8 hosts in a cluster.
  • For KVM, do not put more than 16 hosts in a cluster.
For hardware requirements, see the installation section for your hypervisor in the CloudStack Installation Guide.
5.6.1.1.1. XenServer Host Additional Requirements
If network bonding is in use, the administrator must cable the new host identically to other hosts in the cluster.
For all additional hosts to be added to the cluster, run the following command. This will cause the host to join the master in a XenServer pool.
# xe pool-join master-address=[master IP] master-username=root master-password=[your password]

Note

When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
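After the join completes, you can verify pool membership from the master before continuing; every host in the cluster should be listed. This is only an optional check:
# xe host-list params=uuid,name-label,address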
With all hosts added to the XenServer pool, run the cloud-setup-bonding.sh script. This script will complete the configuration and setup of the bonds on the new hosts in the cluster.
  1. Copy the script from the Management Server in /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
  2. Run the script:
    # ./cloud-setup-bonding.sh
5.6.1.1.2. KVM Host Additional Requirements
  • If shared mountpoint storage is in use, the administrator should ensure that the new host has all the same mountpoints (with storage mounted) as the other hosts in the cluster.
  • Make sure the new host has the same network configuration (guest, private, and public network) as other hosts in the cluster.
  • If you are using OpenVswitch bridges, edit the agent.properties file on the KVM host and set the parameter network.bridge.type to openvswitch before adding the host to CloudStack, as shown in the example below.
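A minimal sketch of that edit, assuming the agent configuration file is at /etc/cloudstack/agent/agent.properties (depending on the packaging it may instead be under /etc/cloud/agent/; verify the path on your host first):
# echo "network.bridge.type=openvswitch" >> /etc/cloudstack/agent/agent.properties
# grep network.bridge.type /etc/cloudstack/agent/agent.properties
network.bridge.type=openvswitch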

5.6.1.2. Adding a XenServer or KVM Host

  • If you have not already done so, install the hypervisor software on the host. You will need to know which version of the hypervisor software is supported by CloudStack and what additional configuration is required to ensure the host will work with CloudStack. To find these installation details, see the appropriate section for your hypervisor in the CloudStack Installation Guide.
  • Log in to the CloudStack UI as administrator.
  • In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the host.
  • Click the Compute tab. In the Clusters node, click View All.
  • Click the cluster where you want to add the host.
  • Click View Hosts.
  • Click Add Host.
  • Provide the following information.
    • Host Name. The DNS name or IP address of the host.
    • Username. Usually root.
    • Password. This is the password for the user named above (from your XenServer or KVM install).
    • Host Tags (Optional). Any labels that you use to categorize hosts for ease of maintenance. For example, you can set this to the cloud's HA tag (set in the ha.tag global configuration parameter) if you want this host to be used only for VMs with the "high availability" feature enabled. For more information, see HA-Enabled Virtual Machines as well as HA for Hosts.
    There may be a slight delay while the host is provisioned. It should automatically display in the UI.
  • Repeat for additional hosts.

5.6.2. Adding a Host (vSphere)

For vSphere servers, we recommend creating the cluster of hosts in vCenter and then adding the entire cluster to CloudStack. See Add Cluster: vSphere.

5.7. Add Primary Storage

5.7.1. System Requirements for Primary Storage

Hardware requirements:
  • Any standards-compliant iSCSI or NFS server that is supported by the underlying hypervisor.
  • The storage server should be a machine with a large number of disks. The disks should ideally be managed by a hardware RAID controller.
  • Minimum required capacity depends on your needs.
When setting up primary storage, follow these restrictions:
  • Primary storage cannot be added until a host has been added to the cluster.
  • If you do not provision shared primary storage, you must set the global configuration parameter system.vm.local.storage.required to true, or else you will not be able to start VMs.

5.7.2. Adding Primary Storage

When you create a new zone, the first primary storage is added as part of that procedure. You can add primary storage servers at any time, such as when adding a new cluster or adding more servers to an existing cluster.

Warning

Be sure there is nothing stored on the server. Adding the server to CloudStack will destroy any existing data.
  1. Log in to the CloudStack UI (see Section 4.1, “Log In to the UI”).
  2. In the left navigation, choose Infrastructure. In Zones, click View More, then click the zone in which you want to add the primary storage.
  3. Click the Compute tab.
  4. In the Primary Storage node of the diagram, click View All.
  5. Click Add Primary Storage.
  6. Provide the following information in the dialog. The information required varies depending on your choice in Protocol.
    • Pod. The pod for the storage device.
    • Cluster. The cluster for the storage device.
    • Name. The name of the storage device.
    • Protocol. For XenServer, choose either NFS, iSCSI, or PreSetup. For KVM, choose NFS or SharedMountPoint. For vSphere choose either VMFS (iSCSI or FiberChannel) or NFS.
    • Server (for NFS, iSCSI, or PreSetup). The IP address or DNS name of the storage device.
    • Server (for VMFS). The IP address or DNS name of the vCenter server.
    • Path (for NFS). In NFS this is the exported path from the server.
    • Path (for VMFS). In vSphere this is a combination of the datacenter name and the datastore name. The format is "/" datacenter name "/" datastore name. For example, "/cloud.dc.VM/cluster1datastore".
    • Path (for SharedMountPoint). With KVM this is the path on each host that is where this primary storage is mounted. For example, "/mnt/primary".
    • SR Name-Label (for PreSetup). Enter the name-label of the SR that has been set up outside CloudStack.
    • Target IQN (for iSCSI). In iSCSI this is the IQN of the target. For example, iqn.1986-03.com.sun:02:01ec9bb549-1271378984.
    • Lun # (for iSCSI). In iSCSI this is the LUN number. For example, 3.
    • Tags (optional). The comma-separated list of tags for this storage device. It should be an equivalent set or superset of the tags on your disk offerings.
    The tag sets on primary storage across clusters in a Zone must be identical. For example, if cluster A provides primary storage that has tags T1 and T2, all other clusters in the Zone must also provide primary storage that has tags T1 and T2.
  7. Click OK.

5.8. Add Secondary Storage

5.8.1. System Requirements for Secondary Storage

  • NFS storage appliance or Linux NFS server
  • (Optional) OpenStack Object Storage (Swift) (see http://swift.openstack.org)
  • 100GB minimum capacity
  • A secondary storage device must be located in the same zone as the guest VMs it serves.
  • Each Secondary Storage server must be available to all hosts in the zone.

5.8.2. Adding Secondary Storage

When you create a new zone, the first secondary storage is added as part of that procedure. You can add secondary storage servers at any time to add more servers to an existing zone.

Warning

Be sure there is nothing stored on the server. Adding the server to CloudStack will destroy any existing data.
  1. If you are going to use Swift for cloud-wide secondary storage, you must add the Swift storage to CloudStack before you add the local zone secondary storage servers. See Section 5.3, “Adding a Zone”.
  2. To prepare for local zone secondary storage, you should have created and mounted an NFS share during Management Server installation. See Section 3.5.6, “Prepare NFS Shares”.
  3. Make sure you prepared the system VM template during Management Server installation. See Section 3.5.8, “Prepare the System VM Template”.
  4. Now that the secondary storage server for per-zone storage is prepared, add it to CloudStack. Secondary storage is added as part of the procedure for adding a new zone. See Section 5.3, “Adding a Zone”.

5.9. Initialize and Test

After everything is configured, CloudStack will perform its initialization. This can take 30 minutes or more, depending on the speed of your network. When the initialization has completed successfully, the administrator's Dashboard should be displayed in the CloudStack UI.
  1. Verify that the system is ready. In the left navigation bar, select Templates. Click on the CentOS 5.5 (64bit) no Gui (KVM) template. Check to be sure that the status is "Download Complete." Do not proceed to the next step until this status is displayed.
  2. Go to the Instances tab, and filter by My Instances.
  3. Click Add Instance and follow the steps in the wizard.
    1. Choose the zone you just added.
    2. In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available.
    3. Select a service offering. Be sure that the hardware you have allows starting the selected service offering.
    4. In data disk offering, if desired, add another data disk. This is a second volume that will be available to the guest but not mounted in it. For example, in Linux on XenServer you will see /dev/xvdb in the guest after rebooting the VM. A reboot is not required if you have a PV-enabled OS kernel in use.
    5. In default network, choose the primary network for the guest. In a trial installation, you would have only one option here.
    6. Optionally give your VM a name and a group. Use any descriptive text you would like.
    7. Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup. You can watch the VM’s progress in the Instances screen.
  4. To use the VM, click the View Console button. ConsoleButton.png: button to launch a console
    For more information about using VMs, including instructions for how to allow incoming network traffic to the VM, start, stop, and delete VMs, and move a VM from one host to another, see Working With Virtual Machines in the Administrator’s Guide.
Congratulations! You have successfully completed a CloudStack Installation.
If you decide to grow your deployment, you can add more hosts, primary storage, zones, pods, and clusters.

Chapter 6. Global Configuration Parameters

6.1. Setting Global Configuration Parameters

CloudStack provides parameters that you can set to control many aspects of the cloud. When CloudStack is first installed, and periodically thereafter, you might need to modify these settings.
  1. Log in to the UI as administrator.
  2. In the left navigation bar, click Global Settings.
  3. In Select View, choose one of the following:
    • Global Settings. This displays a list of the parameters with brief descriptions and current values.
    • Hypervisor Capabilities. This displays a list of hypervisor versions with the maximum number of guests supported for each.
  4. Use the search box to narrow down the list to those you are interested in.
  5. Click the Edit icon to modify a value. If you are viewing Hypervisor Capabilities, you must click the name of the hypervisor first to display the editing screen.

6.2. About Global Configuration Parameters

CloudStack provides a variety of settings you can use to set limits, configure features, and enable or disable features in the cloud. Once your Management Server is running, you might need to set some of these global configuration parameters, depending on what optional features you are setting up.
To modify global configuration parameters, use the steps in "Setting Global Configuration Parameters."
The documentation for each CloudStack feature should direct you to the names of the applicable parameters. Many of them are discussed in the CloudStack Administration Guide. The following table shows a few of the more useful parameters.
Field
Value
management.network.cidr
A CIDR that describes the network that the management CIDRs reside on. This variable must be set for deployments that use vSphere. It is recommended to be set for other deployments as well. Example: 192.168.3.0/24.
xen.setup.multipath
For XenServer nodes, this is a true/false variable that instructs CloudStack to enable iSCSI multipath on the XenServer Hosts when they are added. This defaults to false. Set it to true if you would like CloudStack to enable multipath.
If this is true for an NFS-based deployment, multipath will still be enabled on the XenServer host. However, this does not impact NFS operation and is harmless.
secstorage.allowed.internal.sites
This is used to protect your internal network from rogue attempts to download arbitrary files using the template download feature. This is a comma-separated list of CIDRs. If a requested URL matches any of these CIDRs, the Secondary Storage VM will use the private network interface to fetch it. Other URLs will go through the public interface. We suggest you set this to the CIDRs of one or two hardened internal machines where you keep your templates. For example, set it to 192.168.1.66/32.
use.local.storage
Determines whether CloudStack will use storage that is local to the Host for data disks, templates, and snapshots. By default CloudStack will not use this storage. You should change this to true if you want to use local storage and you understand the reliability and feature drawbacks to choosing local storage.
host
This is the IP address of the Management Server. If you are using multiple Management Servers you should enter a load balanced IP address that is reachable via the private network.
default.page.size
Maximum number of items per page that can be returned by a CloudStack API command. The limit applies at the cloud level and can vary from cloud to cloud. You can override this with a lower value on a particular API call by using the page and pagesize API command parameters. For more information, see the Developer's Guide. Default: 500.
ha.tag
The label you want to use throughout the cloud to designate certain hosts as dedicated HA hosts. These hosts will be used only for HA-enabled VMs that are restarting due to the failure of another host. For example, you could set this to ha_host. Specify the ha.tag value as a host tag when you add a new host to the cloud.
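Besides the UI procedure in Section 6.1, global parameters can also be read and changed through the CloudStack API. The following is only a sketch: it assumes the unauthenticated integration API port has been enabled on the Management Server (the integration.api.port global setting, commonly 8096); for a production cloud you would instead send signed requests using your API keys.
# curl "http://<management-server-ip>:8096/client/api?command=listConfigurations&name=ha.tag"
# curl "http://<management-server-ip>:8096/client/api?command=updateConfiguration&name=ha.tag&value=ha_host"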

Chapter 7. Hypervisor Installation

7.1. KVM Hypervisor Host Installation

7.1.1. System Requirements for KVM Hypervisor Hosts

KVM is included with a variety of Linux-based operating systems. Although you are not required to run these distributions, the following are recommended:
  • CentOS / RHEL: 6.3
  • Ubuntu: 12.04(.1)
The main requirement for KVM hypervisors is the libvirt and Qemu version. No matter what Linux distribution you are using, make sure the following requirements are met:
  • libvirt: 0.9.4 or higher
  • Qemu/KVM: 1.0 or higher
The default bridge in CloudStack is the Linux native bridge implementation (bridge module). CloudStack includes an option to work with OpenVswitch; the requirements for that are listed below:
  • libvirt: 0.9.11 or higher
  • openvswitch: 1.7.1 or higher
In addition, the following hardware requirements apply:
  • Within a single cluster, the hosts must be of the same distribution version.
  • All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
  • Must support HVM (Intel-VT or AMD-V enabled)
  • 64-bit x86 CPU (more cores results in better performance)
  • 4 GB of memory
  • At least 1 NIC
  • When you deploy CloudStack, the hypervisor host must not have any VMs already running

7.1.2. KVM Installation Overview

If you want to use the Linux Kernel Virtual Machine (KVM) hypervisor to run guest virtual machines, install KVM on the host(s) in your cloud. The material in this section doesn't duplicate KVM installation docs. It provides the CloudStack-specific steps that are needed to prepare a KVM host to work with CloudStack.

Warning

Before continuing, make sure that you have applied the latest updates to your host.

Warning

It is NOT recommended to run services on this host not controlled by CloudStack.
The procedure for installing a KVM Hypervisor Host is:
  1. Prepare the Operating System
  2. Install and configure libvirt
  3. Configure Security Policies (AppArmor and SELinux)
  4. Install and configure the Agent

7.1.3. Prepare the Operating System

The OS of the Host must be prepared to host the CloudStack Agent and run KVM instances.
  1. Log in to your OS as root.
  2. Check for a fully qualified hostname.
    $ hostname --fqdn
    This should return a fully qualified hostname such as "kvm1.lab.example.org". If it does not, edit /etc/hosts so that it does (see the example after this list).
  3. Make sure that the machine can reach the Internet.
    $ ping www.cloudstack.org
  4. Turn on NTP for time synchronization.

    Note

    NTP is required to synchronize the clocks of the servers in your cloud. Unsynchronized clocks can cause unexpected problems.
    1. Install NTP.
      In RHEL or CentOS:
      $ yum install ntp
      In Ubuntu:
      $ apt-get install openntpd
  5. Repeat all of these steps on every hypervisor host.
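As an example of the /etc/hosts edit mentioned in step 2, a host with the address 192.168.42.11 could be mapped to the fully qualified name kvm1.lab.example.org with a line like the following (the IP address and names here are purely illustrative):
192.168.42.11   kvm1.lab.example.org   kvm1
After saving the file, hostname --fqdn should return kvm1.lab.example.org.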

7.1.4. Install and configure the Agent

To manage KVM instances on the host, CloudStack uses an Agent. This Agent communicates with the Management Server and controls all the instances on the host.
Start by installing the agent:
In RHEL or CentOS:
$ yum install cloud-agent
In Ubuntu:
$ apt-get install cloud-agent
The host is now ready to be added to a cluster. This is covered in a later section, see Section 5.6, “Adding a Host”. It is recommended that you continue to read the documentation before adding the host!

7.1.5. Install and Configure libvirt

CloudStack uses libvirt for managing virtual machines. Therefore it is vital that libvirt is configured correctly. Libvirt is a dependency of cloud-agent and should already be installed.
  1. In order to have live migration working, libvirt has to listen for unsecured TCP connections. We also need to turn off libvirt's attempt to use Multicast DNS advertising. Both of these settings are in /etc/libvirt/libvirtd.conf. (A consolidated example of steps 1 through 3 follows this list.)
    Set the following parameters:
    listen_tls = 0
    listen_tcp = 1
    tcp_port = "16509"
    auth_tcp = "none"
    mdns_adv = 0
  2. Turning on "listen_tcp" in libvirtd.conf is not enough, we have to change the parameters as well:
    On RHEL or CentOS modify /etc/sysconfig/libvirtd:
    Uncomment the following line:
    #LIBVIRTD_ARGS="--listen"
    On Ubuntu: modify /etc/init/libvirt-bin.conf
    Change the following line (at the end of the file):
    exec /usr/sbin/libvirtd -d
    to (just add -l)
    exec /usr/sbin/libvirtd -d -l
  3. Restart libvirt
    In RHEL or CentOS:
    $ service libvirtd restart
    In Ubuntu:
    $ service libvirt-bin restart
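As referenced in step 1, here is a consolidated sketch of steps 1 through 3 for RHEL or CentOS, applying the settings with sed instead of editing the files by hand. The expressions assume the stock, commented-out defaults are still present in both files, so back them up and review the result before restarting libvirt:
# sed -i 's/#listen_tls = 0/listen_tls = 0/;s/#listen_tcp = 1/listen_tcp = 1/;s/#tcp_port = "16509"/tcp_port = "16509"/;s/#auth_tcp = "sasl"/auth_tcp = "none"/;s/#mdns_adv = 1/mdns_adv = 0/' /etc/libvirt/libvirtd.conf
# sed -i 's/#LIBVIRTD_ARGS="--listen"/LIBVIRTD_ARGS="--listen"/' /etc/sysconfig/libvirtd
# service libvirtd restart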

7.1.6. Configure the Security Policies

CloudStack performs various operations that can be blocked by security mechanisms such as AppArmor and SELinux. These have to be relaxed as described below to ensure the Agent has all the required permissions.
  1. Configure SELinux (RHEL and CentOS)
    1. Check to see whether SELinux is installed on your machine. If not, you can skip this section.
      In RHEL or CentOS, SELinux is installed and enabled by default. You can verify this with:
      $ rpm -qa | grep selinux
    2. Set the SELINUX variable in /etc/selinux/config to "permissive". This ensures that the permissive setting will be maintained after a system reboot.
      In RHEL or CentOS:
      vi /etc/selinux/config
      Change the following line
      SELINUX=enforcing
      to this
      SELINUX=permissive
    3. Then set SELinux to permissive starting immediately, without requiring a system reboot.
      $ setenforce permissive
  2. Configure Apparmor (Ubuntu)
    1. Check to see whether AppArmor is installed on your machine. If not, you can skip this section.
      In Ubuntu AppArmor is installed and enabled by default. You can verify this with:
      $ dpkg --list 'apparmor'
    2. Disable the AppArmor profiles for libvirt
      $ ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
      $ ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
      $ apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
      $ apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
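A quick, optional check that the policies are no longer in the way: on RHEL or CentOS, getenforce should report Permissive, and on Ubuntu, aa-status (shipped with AppArmor) should no longer list the libvirt profiles in enforce mode.
# getenforce
Permissive
# aa-status | grep libvirt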

7.1.7. Configure the network bridges

Warning

This is a very important section, please make sure you read this thoroughly.

Note

This section details how to configure bridges using the native implementation in Linux. Please refer to the next section if you intend to use OpenVswitch.
In order to forward traffic to your instances you will need at least two bridges: public and private.
By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.
The most important factor is that you keep the configuration consistent on all your hypervisors.

7.1.7.1. Network example

There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one for your private network and one for the public network.
We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
  1. VLAN 100 for management of the hypervisor
  2. VLAN 200 for public network of the instances (cloudbr0)
  3. VLAN 300 for private network of the instances (cloudbr1)
On VLAN 100 we give the hypervisor the IP address 192.168.42.11/24 with the gateway 192.168.42.1.

Note

The Hypervisor and Management server don't have to be in the same subnet!

7.1.7.2. Configuring the network bridges

How to configure these depends on the distribution you are using; below you'll find examples for RHEL/CentOS and Ubuntu.

Note

The goal is to have two bridges called 'cloudbr0' and 'cloudbr1' after this section. This should be used as a guideline only. The exact configuration will depend on your network layout.
7.1.7.2.1. Configure in RHEL or CentOS
The required packages were installed when libvirt was installed, so we can proceed to configuring the network.
First we configure eth0:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make sure it looks similar to:
DEVICE=eth0
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
We now have to configure the three VLAN interfaces:
vi /etc/sysconfig/network-scripts/ifcfg-eth0.100
DEVICE=eth0.100
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth0.200
DEVICE=eth0.200
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr0
vi /etc/sysconfig/network-scripts/ifcfg-eth0.300
DEVICE=eth0.300
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr1
Now that the VLAN interfaces are configured, we can add the bridges on top of them.
vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
We configure it as a plain bridge without an IP address:
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
We do the same for cloudbr1
vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.
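For example, on RHEL or CentOS the network can be restarted and the bridges inspected as follows (brctl is provided by the bridge-utils package):
# service network restart
# brctl show
The output of brctl show should list cloudbr0 and cloudbr1 with eth0.200 and eth0.300 attached to them.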

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!
7.1.7.2.2. Configure in Ubuntu
All the required packages were installed when you installed libvirt, so we only have to configure the network.
vi /etc/network/interfaces
Modify the interfaces file to look like this:
auto lo
iface lo inet loopback

# The primary network interface
auto eth0.100
iface eth0.100 inet static
    address 192.168.42.11
    netmask 255.255.255.0
    gateway 192.168.42.1
    dns-nameservers 8.8.8.8 8.8.4.4
    dns-domain lab.example.org

# Public network
auto cloudbr0
iface cloudbr0 inet manual
    bridge_ports eth0.200
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports eth0.300
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.
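On Ubuntu you can, for example, bring the new interfaces up individually instead of rebooting, assuming the vlan and bridge-utils packages are installed (they provide the ifupdown VLAN hooks and the brctl tool):
# ifup eth0.100
# ifup cloudbr0
# ifup cloudbr1
# brctl show
Afterwards brctl show should list cloudbr0 and cloudbr1 with their bridge_ports attached.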

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!

7.1.8. Configure the network using OpenVswitch

Warning

This is a very important section, please make sure you read this thoroughly.
In order to forward traffic to your instances you will need at least two bridges: public and private.
By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.
The most important factor is that you keep the configuration consistent on all your hypervisors.

7.1.8.1. Preparing

To make sure that the native bridge module will not interfere with openvswitch, the bridge module should be added to the blacklist. See the modprobe documentation for your distribution on where to find the blacklist. Make sure the module is not loaded, either by rebooting or by executing rmmod bridge, before executing the next steps.
The network configurations below depend on the ifup-ovs and ifdown-ovs scripts which are part of the openvswitch installation. They should be installed in /etc/sysconfig/network-scripts/.

7.1.8.2. Network example

There are many ways to configure your network. In the Basic networking mode you should have two (V)LANs, one for your private network and one for the public network.
We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
  1. VLAN 100 for management of the hypervisor
  2. VLAN 200 for public network of the instances (cloudbr0)
  3. VLAN 300 for private network of the instances (cloudbr1)
On VLAN 100 we give the hypervisor the IP address 192.168.42.11/24 with the gateway 192.168.42.1.

Note

The Hypervisor and Management server don't have to be in the same subnet!

7.1.8.3. Configuring the network bridges

How to configure these depends on the distribution you are using; below you'll find examples for RHEL/CentOS.

Note

The goal is to have three bridges called 'mgmt0', 'cloudbr0' and 'cloudbr1' after this section. This should be used as a guideline only. The exact configuration will depend on your network layout.
7.1.8.3.1. Configure OpenVswitch
The network interfaces using OpenVswitch are created using the ovs-vsctl command. This command will configure the interfaces and persist them to the OpenVswitch database.
First we create a main bridge connected to the eth0 interface. Next we create three fake bridges, each connected to a specific VLAN tag.
# ovs-vsctl add-br cloudbr
# ovs-vsctl add-port cloudbr eth0 
# ovs-vsctl set port cloudbr trunks=100,200,300
# ovs-vsctl add-br mgmt0 cloudbr 100
# ovs-vsctl add-br cloudbr0 cloudbr 200
# ovs-vsctl add-br cloudbr1 cloudbr 300
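To verify the result you can dump the OpenVswitch configuration; the output should show the cloudbr bridge with eth0 as a trunked port and the three fake bridges mgmt0, cloudbr0, and cloudbr1 tagged with VLANs 100, 200, and 300:
# ovs-vsctl show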
7.1.8.3.2. Configure in RHEL or CentOS
The required packages were installed when openvswitch and libvirt were installed, so we can proceed to configuring the network.
First we configure eth0:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make sure it looks similar to:
DEVICE=eth0
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
We have to configure the base bridge with the trunk.
vi /etc/sysconfig/network-scripts/ifcfg-cloudbr
DEVICE=cloudbr
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
DEVICETYPE=ovs
TYPE=OVSBridge
We now have to configure the three VLAN bridges:
vi /etc/sysconfig/network-scripts/ifcfg-mgmt0
DEVICE=mgmt0
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=static
DEVICETYPE=ovs
TYPE=OVSBridge
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
DEVICETYPE=ovs
TYPE=OVSBridge
vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=OVSBridge
DEVICETYPE=ovs
With this configuration you should be able to restart the network, although a reboot is recommended to see if everything works properly.

Warning

Make sure you have an alternative way like IPMI or ILO to reach the machine in case you made a configuration error and the network stops functioning!

7.1.9. Configuring the firewall

The hypervisor needs to be able to communicate with other hypervisors and the management server needs to be able to reach the hypervisor.
In order to do so we have to open the following TCP ports (if you are using a firewall):
  1. 22 (SSH)
  2. 1798
  3. 16509 (libvirt)
  4. 5900 - 6100 (VNC consoles)
  5. 49152 - 49216 (libvirt live migration)
How to open these ports depends on the firewall you are using. Below you'll find examples of how to open them in RHEL/CentOS and Ubuntu.

7.1.9.1. Open ports in RHEL/CentOS

RHEL and CentOS use iptables for firewalling the system; you can open the extra ports by executing the following iptables commands:
$ iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 16509 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 49152:49216 -j ACCEPT
These iptables settings are not persistent across reboots, so we have to save them:
$ iptables-save > /etc/sysconfig/iptables
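On RHEL and CentOS you can achieve the same result with the iptables init script, which writes the running rules to the same file, and then confirm the rules are in place:
# service iptables save
# iptables -L INPUT -n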

7.1.9.2. Open ports in Ubuntu

The default firewall under Ubuntu is UFW (Uncomplicated FireWall), which is a Python wrapper around iptables.
To open the required ports, execute the following commands:
$ ufw allow proto tcp from any to any port 22
$ ufw allow proto tcp from any to any port 1798
$ ufw allow proto tcp from any to any port 16509
$ ufw allow proto tcp from any to any port 5900:6100
$ ufw allow proto tcp from any to any port 49152:49216

Note

By default UFW is not enabled on Ubuntu. Executing these commands with the firewall disabled does not enable the firewall.
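If you do want the firewall active, a minimal sketch is to enable UFW after adding the rules above and then verify them; since port 22 was allowed first, an SSH session should survive the change:
# ufw enable
# ufw status numbered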

7.1.10. Add the host to CloudStack

The host is now ready to be added to a cluster. This is covered in a later section, see Section 5.6, “Adding a Host”. It is recommended that you continue to read the documentation before adding the host!

7.2. Citrix XenServer Installation for CloudStack

If you want to use the Citrix XenServer hypervisor to run guest virtual machines, install XenServer 6.0 or XenServer 6.0.2 on the host(s) in your cloud. For an initial installation, follow the steps below. If you have previously installed XenServer and want to upgrade to another version, see Section 7.2.11, “Upgrading XenServer Versions”.

7.2.1. System Requirements for XenServer Hosts

  • The host must be certified as compatible with one of the following. See the Citrix Hardware Compatibility Guide: http://hcl.xensource.com
    • XenServer 5.6 SP2
    • XenServer 6.0
    • XenServer 6.0.2
  • You must re-install Citrix XenServer if you are going to re-use a host from a previous install.
  • Must support HVM (Intel-VT or AMD-V enabled)
  • Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.
  • All hosts within a cluster must be homogeneous. The CPUs must be of the same type, count, and feature flags.
  • Must support HVM (Intel-VT or AMD-V enabled in BIOS)
  • 64-bit x86 CPU (more cores results in better performance)
  • Hardware virtualization support required
  • 4 GB of memory
  • 36 GB of local disk
  • At least 1 NIC
  • Statically allocated IP Address
  • When you deploy CloudStack, the hypervisor host must not have any VMs already running

Warning

The lack of up-to-date hotfixes can lead to data corruption and lost VMs.

7.2.2. XenServer Installation Steps

7.2.3. Configure XenServer dom0 Memory

Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see http://support.citrix.com/article/CTX126531. The article refers to XenServer 5.6, but the same information applies to XenServer 6.0.

7.2.4. Username and Password

All XenServers in a cluster must have the same username and password as configured in CloudStack.

7.2.5. Time Synchronization

The host must be set to use NTP. All hosts in a pod must have the same time.
  1. Install NTP.
    # yum install ntp
  2. Edit the NTP configuration file to point to your NTP server.
    # vi /etc/ntp.conf
    Add one or more server lines in this file with the names of the NTP servers you want to use. For example:
    server 0.xenserver.pool.ntp.org
    server 1.xenserver.pool.ntp.org
    server 2.xenserver.pool.ntp.org
    server 3.xenserver.pool.ntp.org
    
  3. Restart the NTP client.
    # service ntpd restart
  4. Make sure NTP will start again upon reboot.
    # chkconfig ntpd on
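To confirm that the host is actually synchronizing, you can query the configured peers (ntpq is installed along with the ntp package); an asterisk next to a server in the output marks the current synchronization source:
# ntpq -p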

7.2.6. Licensing

Citrix XenServer Free version provides 30 days of usage without a license. Following the 30-day trial, XenServer requires a free activation and license. You can choose to install a license now or skip this step. If you skip this step, you will need to install a license later, when you activate and license the XenServer.

7.2.6.1. Getting and Deploying a License

If you choose to install a license now, you will need to use XenCenter to activate and get a license.
  1. In XenCenter, click Tools > License manager.
  2. Select your XenServer and select Activate Free XenServer.
  3. Request a license.
You can install the license with XenCenter or using the xe command line tool.

7.2.7. Install CloudStack XenServer Support Package (CSP)

(Optional)
To enable security groups, elastic load balancing, and elastic IP on XenServer, download and install the CloudStack XenServer Support Package (CSP). After installing XenServer, perform the following additional steps on each XenServer host.
  1. Download the CSP software onto the XenServer host from one of the following links:
    For XenServer 6.0.2:
    For XenServer 5.6 SP2:
    For XenServer 6.0:
  2. Extract the file:
    # tar xf xenserver-cloud-supp.tgz
  3. Run the following script:
    # xe-install-supplemental-pack xenserver-cloud-supp.iso
  4. If the XenServer host is part of a zone that uses basic networking, disable Open vSwitch (OVS):
    # xe-switch-network-backend  bridge
    Restart the host machine when prompted.
The XenServer host is now ready to be added to CloudStack.

7.2.8. Primary Storage Setup for XenServer

CloudStack natively supports NFS, iSCSI and local storage. If you are using one of these storage types, there is no need to create the XenServer Storage Repository ("SR").
If, however, you would like to use storage connected via some other technology, such as FiberChannel, you must set up the SR yourself. To do so, perform the following steps. If you have your hosts in a XenServer pool, perform the steps on the master node. If you are working with a single XenServer which is not part of a cluster, perform the steps on that XenServer.
  1. Connect FiberChannel cable to all hosts in the cluster and to the FiberChannel storage host.
  2. Rescan the SCSI bus. Either use the following command or use XenCenter to perform an HBA rescan.
    # scsi-rescan
  3. Repeat step 2 on every host.
  4. Check to be sure you see the new SCSI disk.
    # ls /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -l
    The output should look like this, although the specific file name will be different (scsi-<scsiID>):
    lrwxrwxrwx 1 root root 9 Mar 16 13:47
    /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -> ../../sdc
    
  5. Repeat step 4 on every host.
  6. On the storage server, run this command to get a unique ID for the new SR.
    # uuidgen
    The output should look like this, although the specific ID will be different:
    e6849e96-86c3-4f2c-8fcc-350cc711be3d
  7. Create the FiberChannel SR. In name-label, use the unique ID you just generated.
    # xe sr-create type=lvmohba shared=true
    device-config:SCSIid=360a98000503365344e6f6177615a516b
    name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
    
    This command returns a unique ID for the SR, like the following example (your ID will be different):
    7a143820-e893-6c6a-236e-472da6ee66bf
  8. To create a human-readable description for the SR, use the following command. In uuid, use the SR ID returned by the previous command. In name-description, set whatever friendly text you prefer.
    # xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel storage repository"
    Make note of the values you will need when you add this storage to CloudStack later (see Section 5.7, “Add Primary Storage”). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the name-label you set earlier (in this example, e6849e96-86c3-4f2c-8fcc-350cc711be3d).
  9. (Optional) If you want to enable multipath I/O on a FiberChannel SAN, refer to the documentation provided by the SAN vendor.

7.2.9. iSCSI Multipath Setup for XenServer (Optional)

When setting up the storage repository on a Citrix XenServer, you can enable multipath I/O, which uses redundant physical components to provide greater reliability in the connection between the server and the SAN. To enable multipathing, use a SAN solution that is supported for Citrix servers and follow the procedures in Citrix documentation. The following links provide a starting point:
You can also ask your SAN vendor for advice about setting up your Citrix repository for multipathing.
Make note of the values you will need when you add this storage to the CloudStack later (see Section 5.7, “Add Primary Storage”). In the Add Primary Storage dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you will enter the same name used to create the SR.
If you encounter difficulty, contact the support team for your SAN vendor. If they are not able to solve your issue, see Contacting Support.

7.2.10. Physical Networking Setup for XenServer

Once XenServer has been installed, you may need to do some additional network configuration. At this point in the installation, you should have a plan for what NICs the host will have and what traffic each NIC will carry. The NICs should be cabled as necessary to implement your plan.
If you plan on using NIC bonding, the NICs on all hosts in the cluster must be cabled exactly the same. For example, if eth0 is in the private bond on one host in a cluster, then eth0 must be in the private bond on all hosts in the cluster.
The IP address assigned for the management network interface must be static. It can be set on the host itself or obtained via static DHCP.
CloudStack configures network traffic of various types to use different NICs or bonds on the XenServer host. You can control this process and provide input to the Management Server through the use of XenServer network name labels. The name labels are placed on physical interfaces or bonds and configured in CloudStack. In some simple cases the name labels are not required.

7.2.10.1. Configuring Public Network with a Dedicated NIC for XenServer (Optional)

CloudStack supports the use of a second NIC (or bonded pair of NICs, described in Section 7.2.10.4, “NIC Bonding for XenServer (Optional)”) for the public network. If bonding is not used, the public network can be on any NIC and can be on different NICs on the hosts in a cluster. For example, the public network can be on eth0 on node A and eth1 on node B. However, the XenServer name-label for the public network must be identical across all hosts. The following examples set the network label to "cloud-public". After the management server is installed and running you must configure it with the name of the chosen network label (e.g. "cloud-public"); this is discussed in Section 3.5, “Management Server Installation”.
If you are using two NICs bonded together to create a public network, see Section 7.2.10.4, “NIC Bonding for XenServer (Optional)”.
If you are using a single dedicated NIC to provide public network access, follow this procedure on each new host that is added to CloudStack before adding the host.
  1. Run xe network-list and find the public network. This is usually attached to the NIC that is public. Once you find the network make note of its UUID. Call this <UUID-Public>.
  2. Run the following command.
    # xe network-param-set name-label=cloud-public uuid=<UUID-Public>
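You can confirm that the label was applied by listing the network again; the name-label shown for <UUID-Public> should now be cloud-public:
# xe network-list uuid=<UUID-Public> params=uuid,name-label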

7.2.10.2. Configuring Multiple Guest Networks for XenServer (Optional)

CloudStack supports the use of multiple guest networks with the XenServer hypervisor. Each network is assigned a name-label in XenServer. For example, you might have two networks with the labels "cloud-guest" and "cloud-guest2". After the management server is installed and running, you must add the networks and use these labels so that CloudStack is aware of the networks.
Follow this procedure on each new host before adding the host to CloudStack:
  1. Run xe network-list and find one of the guest networks. Once you find the network make note of its UUID. Call this <UUID-Guest>.
  2. Run the following command, substituting your own name-label and uuid values.
    # xe network-param-set name-label=<cloud-guestN> uuid=<UUID-Guest>
  3. Repeat these steps for each additional guest network, using a different name-label and uuid each time.

7.2.10.3. Separate Storage Network for XenServer (Optional)

You can optionally set up a separate storage network. This should be done first on the host, before implementing the bonding steps below. This can be done using one or two available NICs. With two NICs bonding may be done as above. It is the administrator's responsibility to set up a separate storage network.
Give the storage network a different name-label than what will be given for other networks.
For the separate storage network to work correctly, it must be the only interface that can ping the primary storage device's IP address. For example, if eth0 is the management network NIC, ping -I eth0 <primary storage device IP> must fail. In all deployments, secondary storage devices must be pingable from the management network NIC or bond. If a secondary storage device has been placed on the storage network, it must also be pingable via the storage network NIC or bond on the hosts as well.
You can set up two separate storage networks as well. For example, if you intend to implement iSCSI multipath, dedicate two non-bonded NICs to multipath. Each of the two networks needs a unique name-label.
If no bonding is done, the administrator must set up and name-label the separate storage network on all hosts (masters and slaves).
Here is an example to set up eth5 to access a storage network on 172.16.0.0/24.
# xe pif-list host-name-label='hostname' device=eth5
uuid(RO): ab0d3dd4-5744-8fae-9693-a022c7a3471d
device ( RO): eth5
# xe pif-reconfigure-ip DNS=172.16.3.3 gateway=172.16.0.1 IP=172.16.0.55 mode=static netmask=255.255.255.0 uuid=ab0d3dd4-5744-8fae-9693-a022c7a3471d

7.2.10.4. NIC Bonding for XenServer (Optional)

XenServer supports Source Level Balancing (SLB) NIC bonding. Two NICs can be bonded together to carry public, private, and guest traffic, or some combination of these. Separate storage networks are also possible. Here are some example supported configurations:
  • 2 NICs on private, 2 NICs on public, 2 NICs on storage
  • 2 NICs on private, 1 NIC on public, storage uses management network
  • 2 NICs on private, 2 NICs on public, storage uses management network
  • 1 NIC for private, public, and storage
All NIC bonding is optional.
XenServer expects that all nodes in a cluster have the same network cabling and the same bonds implemented. In an installation, the master will be the first host that was added to the cluster, and the slave hosts will be all subsequent hosts added to the cluster. The bonds present on the master set the expectation for hosts added to the cluster later. The procedures for setting up bonds on the master and on the slaves are different, and are described below. There are several important implications of this:
  • You must set bonds on the first host added to a cluster. Then you must use xe commands as below to establish the same bonds in the second and subsequent hosts added to a cluster.
  • Slave hosts in a cluster must be cabled exactly the same as the master. For example, if eth0 is in the private bond on the master, it must be in the management network for added slave hosts.
7.2.10.4.1. Management Network Bonding
The administrator must bond the management network NICs prior to adding the host to CloudStack.
7.2.10.4.2. Creating a Private Bond on the First Host in the Cluster
Use the following steps to create a bond in XenServer. These steps should be run on only the first host in a cluster. This example creates the cloud-private network with two physical NICs (eth0 and eth1) bonded into it.
  1. Find the physical NICs that you want to bond together.
    # xe pif-list host-name-label='hostname' device=eth0
    # xe pif-list host-name-label='hostname' device=eth1
    These commands show the eth0 and eth1 NICs and their UUIDs. Substitute the ethX devices of your choice. Call the UUIDs returned by the above commands slave1-UUID and slave2-UUID.
  2. Create a new network for the bond. For example, a new network with name "cloud-private".
    This label is important. CloudStack looks for a network by a name you configure. You must use the same name-label for all hosts in the cloud for the management network.
    # xe network-create name-label=cloud-private
    # xe bond-create network-uuid=[uuid of cloud-private created above]
    pif-uuids=[slave1-uuid],[slave2-uuid]
Now you have a bonded pair that can be recognized by CloudStack as the management network.
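You can optionally verify the new bond before proceeding. A short sketch, assuming the commands above completed successfully:
# xe bond-list params=uuid,master,slaves
# xe pif-list network-name-label=cloud-private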
7.2.10.4.3. Public Network Bonding
Bonding can be implemented on a separate, public network. The administrator is responsible for creating a bond for the public network if that network will be bonded and will be separate from the management network.
7.2.10.4.4. Creating a Public Bond on the First Host in the Cluster
These steps should be run on only the first host in a cluster. This example creates the cloud-public network with two physical NICs (eth2 and eth3) bonded into it.
  1. Find the physical NICs that you want to bond together.
    # xe pif-list host-name-label='hostname' device=eth2
    # xe pif-list host-name-label='hostname' device=eth3
    These commands show the eth2 and eth3 NICs and their UUIDs. Substitute the ethX devices of your choice. Call the UUIDs returned by the above commands slave1-UUID and slave2-UUID.
  2. Create a new network for the bond. For example, a new network with name "cloud-public".
    This label is important. CloudStack looks for a network by a name you configure. You must use the same name-label for all hosts in the cloud for the public network.
    # xe network-create name-label=cloud-public
    # xe bond-create network-uuid=[uuid of cloud-public created above]
    pif-uuids=[slave1-uuid],[slave2-uuid]
Now you have a bonded pair that can be recognized by CloudStack as the public network.
7.2.10.4.5. Adding More Hosts to the Cluster
With the bonds (if any) established on the master, you should add the additional (slave) hosts. Run the following command for each additional host to be added to the cluster. This will cause the host to join the master in a single XenServer pool.
# xe pool-join master-address=[master IP] master-username=root
master-password=[your password]
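After the joins complete, you can optionally confirm that every host is now a member of the pool. For example:
# xe host-list params=name-label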
7.2.10.4.6. Complete the Bonding Setup Across the Cluster
With all hosts added to the pool, run the cloud-setup-bonding script. This script will complete the configuration and setup of the bonds across all hosts in the cluster.
  1. Copy the script from the Management Server in /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh to the master host and ensure it is executable.
  2. Run the script:
    # ./cloud-setup-bonding.sh
Now the bonds are set up and configured properly across the cluster.

7.2.11. Upgrading XenServer Versions

This section tells how to upgrade XenServer software on CloudStack hosts. The actual upgrade is described in XenServer documentation, but there are some additional steps you must perform before and after the upgrade.

Note

Be sure the hardware is certified compatible with the new version of XenServer.
To upgrade XenServer:
  1. Upgrade the database. On the Management Server node:
    1. Back up the database:
      # mysqldump --user=root --databases cloud > cloud.backup.sql
      # mysqldump --user=root --databases cloud_usage > cloud_usage.backup.sql
    2. You might need to change the OS type settings for VMs running on the upgraded hosts.
      • If you upgraded from XenServer 5.6 GA to XenServer 5.6 SP2, change any VMs that have the OS type CentOS 5.5 (32-bit), Oracle Enterprise Linux 5.5 (32-bit), or Red Hat Enterprise Linux 5.5 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
      • If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2, change any VMs that have the OS type CentOS 5.6 (32-bit), CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.6 (32-bit), Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux 5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to Other Linux (32-bit). Change any VMs that have the 64-bit versions of these same OS types to Other Linux (64-bit).
      • If you upgraded from XenServer 5.6 to XenServer 6.0.2, do all of the above.
    3. Restart the Management Server and Usage Server. You only need to do this once for all clusters.
      # service cloud-management start
      # service cloud-usage start
  2. Disconnect the XenServer cluster from CloudStack.
    1. Log in to the CloudStack UI as root.
    2. Navigate to the XenServer cluster, and click Actions – Unmanage.
    3. Watch the cluster status until it shows Unmanaged.
  3. Log in to one of the hosts in the cluster, and run this command to clean up the VLAN:
    # . /opt/xensource/bin/cloud-clean-vlan.sh
  4. Still logged in to the host, run the upgrade preparation script:
    # /opt/xensource/bin/cloud-prepare-upgrade.sh
    Troubleshooting: If you see the error "can't eject CD," log in to the VM, unmount the CD, and then run the script again.
  5. Upgrade the XenServer software on all hosts in the cluster. Upgrade the master first.
    1. Live migrate all VMs on this host to other hosts. See the instructions for live migration in the Administrator's Guide.
      Troubleshooting: You might see the following error when you migrate a VM:
      [root@xenserver-qa-2-49-4 ~]# xe vm-migrate live=true host=xenserver-qa-2-49-5 vm=i-2-8-VM
      You attempted an operation on a VM which requires PV drivers to be installed but the drivers were not detected.
      vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
      To solve this issue, run the following:
      # /opt/xensource/bin/make_migratable.sh  b6cf79c8-02ee-050b-922f-49583d9f1a14
    2. Reboot the host.
    3. Upgrade to the newer version of XenServer. Use the steps in XenServer documentation.
    4. After the upgrade is complete, copy the following files from the Management Server to this host, placing each file in the location shown:
      • /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py to /opt/xensource/sm/NFSSR.py
      • /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/setupxenserver.sh to /opt/xensource/bin/setupxenserver.sh
      • /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/make_migratable.sh to /opt/xensource/bin/make_migratable.sh
      • /usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh to /opt/xensource/bin/cloud-clean-vlan.sh
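      For example, assuming the Management Server's IP address is 192.168.10.5 (a hypothetical value) and you are logged in to the XenServer host, the files might be copied with scp:
      # scp root@192.168.10.5:/usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py /opt/xensource/sm/NFSSR.py
      # scp root@192.168.10.5:/usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/setupxenserver.sh /opt/xensource/bin/setupxenserver.sh
      # scp root@192.168.10.5:/usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/make_migratable.sh /opt/xensource/bin/make_migratable.sh
      # scp root@192.168.10.5:/usr/lib64/cloud/common/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh /opt/xensource/bin/cloud-clean-vlan.sh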
    5. Run the following script:
      # /opt/xensource/bin/setupxenserver.sh
      Troubleshooting: If you see the following error message, you can safely ignore it.
      mv: cannot stat `/etc/cron.daily/logrotate': No such file or directory
    6. Plug in the storage repositories (physical block devices) to the XenServer host:
      # for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
      Note: If you add a host to this XenServer pool, you need to migrate all VMs on this host to other hosts, and eject this host from the XenServer pool.
  6. Repeat these steps to upgrade every host in the cluster to the same version of XenServer.
  7. Run the following command on one host in the XenServer cluster to clean up the host tags:
    # for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done;

    Note

    When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
  8. Reconnect the XenServer cluster to CloudStack.
    1. Log in to the CloudStack UI as root.
    2. Navigate to the XenServer cluster, and click Actions – Manage.
    3. Watch the status to see that all the hosts come up.
  9. After all hosts are up, run the following on one host in the cluster:
    # /opt/xensource/bin/cloud-clean-vlan.sh

7.3. VMware vSphere Installation and Configuration

If you want to use the VMware vSphere hypervisor to run guest virtual machines, install vSphere on the host(s) in your cloud.

7.3.1. System Requirements for vSphere Hosts

7.3.1.1. Software requirements:

  • vSphere and vCenter, both version 4.1 or 5.0.
    vSphere Standard is recommended. Note however that customers need to consider the CPU constraints in place with vSphere licensing. See http://www.vmware.com/files/pdf/vsphere_pricing.pdf and discuss with your VMware sales representative.
    vCenter Server Standard is recommended.
  • Be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor's support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.

Apply All Necessary Hotfixes

The lack of up-to-date hotfixes can lead to data corruption and lost VMs.

7.3.1.2. Hardware requirements:

  • The host must be certified as compatible with vSphere. See the VMware Hardware Compatibility Guide at http://www.vmware.com/resources/compatibility/search.php.
  • All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled).
  • All hosts within a cluster must be homogeneous. That means the CPUs must be of the same type, count, and feature flags.
  • 64-bit x86 CPU (more cores results in better performance)
  • Hardware virtualization support required
  • 4 GB of memory
  • 36 GB of local disk
  • At least 1 NIC
  • Statically allocated IP Address

7.3.1.3. vCenter Server requirements:

  • Processor - 2 CPUs 2.0GHz or higher Intel or AMD x86 processors. Processor requirements may be higher if the database runs on the same machine.
  • Memory - 3GB RAM. RAM requirements may be higher if your database runs on the same machine.
  • Disk storage - 2GB. Disk requirements may be higher if your database runs on the same machine.
  • Microsoft SQL Server 2005 Express disk requirements. The bundled database requires up to 2GB free disk space to decompress the installation archive.
  • Networking - 1Gbit or 10Gbit.
For more information, see "vCenter Server and the vSphere Client Hardware Requirements" at http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=install/c_vc_hw.html.

7.3.1.4. Other requirements:

  • VMware vCenter Standard Edition 4.1 or 5.0 must be installed and available to manage the vSphere hosts.
  • vCenter must be configured to use the standard port 443 so that it can communicate with the CloudStack Management Server.
  • You must re-install VMware ESXi if you are going to re-use a host from a previous install.
  • CloudStack requires VMware vSphere 4.1 or 5.0. VMware vSphere 4.0 is not supported.
  • All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V enabled). All hosts within a cluster must be homogeneous. That means the CPUs must be of the same type, count, and feature flags.
  • The CloudStack management network must not be configured as a separate virtual network. The CloudStack management network is the same as the vCenter management network, and will inherit its configuration. See Section 7.3.5.2, “Configure vCenter Management Network”.
  • CloudStack requires ESXi. ESX is not supported.
  • All resources used for CloudStack must be used for CloudStack only. CloudStack cannot share an ESXi instance or storage with other management consoles. Do not share the same storage volumes that will be used by CloudStack with a different set of ESXi servers that are not managed by CloudStack.
  • Put all target ESXi hypervisors in a cluster in a separate Datacenter in vCenter.
  • The cluster that will be managed by CloudStack should not contain any VMs. Do not run the management server, vCenter, or any other VMs on the cluster that is designated for CloudStack use. Create a separate cluster for CloudStack's use and make sure that there are no VMs in this cluster.
  • All the required VLANs must be trunked into all network switches that are connected to the ESXi hypervisor hosts. These would include the VLANs for Management, Storage, vMotion, and guest traffic. The guest VLAN (used in Advanced Networking; see Network Setup) is a contiguous range of VLANs that will be managed by CloudStack.

7.3.2. Preparation Checklist for VMware

For a smoother installation, gather the following information before you start:

7.3.2.1. vCenter Checklist

You will need the following information about vCenter. Record a value for each item:
  • vCenter User: This user must have admin privileges.
  • vCenter User Password: Password for the above user.
  • vCenter Datacenter Name: Name of the datacenter.
  • vCenter Cluster Name: Name of the cluster.

7.3.2.2. Networking Checklist for VMware

You will need the following VLAN information. Record a value for each item:
  • ESXi VLAN: VLAN on which all your ESXi hypervisors reside.
  • ESXi VLAN IP Address Range: IP address range in the ESXi VLAN. One address per virtual router is used from this range.
  • ESXi VLAN IP Gateway
  • ESXi VLAN Netmask
  • Management Server VLAN: VLAN on which the CloudStack Management Server is installed.
  • Public VLAN: VLAN for the public network.
  • Public VLAN Gateway
  • Public VLAN Netmask
  • Public VLAN IP Address Range: Range of public IP addresses available for CloudStack use. These addresses will be used by the virtual routers in CloudStack to route private traffic to external networks.
  • VLAN Range for Customer Use: A contiguous range of non-routable VLANs. One VLAN will be assigned to each customer.

7.3.3. vSphere Installation Steps

  1. If you haven't already, you'll need to download and purchase vSphere from the VMware Website (https://www.vmware.com/tryvmware/index.php?p=vmware-vsphere&lp=1) and install it by following the VMware vSphere Installation Guide.
  2. Following installation, perform the following configuration tasks, which are described in the next few sections:
    Required:
    • ESXi host setup
    • Configure host physical networking, virtual switch, vCenter Management Network, and extended port range
    • Prepare storage for iSCSI
    • Configure clusters in vCenter and add hosts to them, or add hosts without clusters to vCenter
    Optional:
    • NIC bonding
    • Multipath storage

7.3.4. ESXi Host setup

All ESXi hosts should enable CPU hardware virtualization support in BIOS. Please note hardware virtualization support is not enabled by default on most servers.

7.3.5. Physical Host Networking

You should have a plan for cabling the vSphere hosts. Proper network configuration is required before adding a vSphere host to CloudStack. To configure an ESXi host, you can use the vSphere Client to add it as a standalone host to vCenter first. Once you see the host appearing in the vCenter inventory tree, click the host node in the inventory tree, and navigate to the Configuration tab.
vsphereclient.png: vSphere client
In the host configuration tab, click the "Hardware/Networking" link to bring up the networking configuration page as above.

7.3.5.1. Configure Virtual Switch

A default virtual switch vSwitch0 is created. CloudStack requires all ESXi hosts in the cloud to use the same set of virtual switch names. If you change the default virtual switch name, you will need to configure one or more CloudStack configuration variables as well.
7.3.5.1.1. Separating Traffic
CloudStack allows you to use vCenter to configure three separate networks per ESXi host. These networks are identified by the name of the vSwitch they are connected to. The allowed networks for configuration are public (for traffic to/from the public internet), guest (for guest-guest traffic), and private (for management and usually storage traffic). You can use the default virtual switch for all three, or create one or two other vSwitches for those traffic types.
If you want to separate traffic in this way you should first create and configure vSwitches in vCenter according to the vCenter instructions. Take note of the vSwitch names you have used for each traffic type. You will configure CloudStack to use these vSwitches.
7.3.5.1.2. Increasing Ports
By default a virtual switch on ESXi hosts is created with 56 ports. We recommend setting it to 4088, the maximum number of ports allowed. To do that, click the "Properties..." link for the virtual switch (note this is not the Properties link for Networking).
vsphereclient.png: vSphere client
In vSwitch properties dialog, select the vSwitch and click Edit. You should see the following dialog:
vsphereclient.png: vSphere client
In this dialog, you can change the number of switch ports. After you've done that, the ESXi host must be rebooted for the setting to take effect.

7.3.5.2. Configure vCenter Management Network

In the vSwitch properties dialog box, you may see a vCenter management network. This same network will also be used as the CloudStack management network. CloudStack requires the vCenter management network to be configured properly. Select the management network item in the dialog, then click Edit.
vsphereclient.png: vSphere client
Make sure the following values are set:
  • VLAN ID set to the desired ID
  • vMotion enabled.
  • Management traffic enabled.
If the ESXi hosts have multiple VMKernel ports, and ESXi is not using the default value "Management Network" as the management network name, you must follow these guidelines to configure the management network port group so that CloudStack can find it:
  • Use one label for the management network port across all ESXi hosts.
  • In the CloudStack UI, go to Configuration - Global Settings and set vmware.management.portgroup to the management network label from the ESXi hosts.
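If you prefer to script this step, the same global setting can also be changed through the CloudStack API. A hedged example, assuming the Management Server runs on localhost and the port group label is "Mgmt Network" (both hypothetical values):
http://localhost:8080/client/api?command=updateConfiguration&name=vmware.management.portgroup&value=Mgmt%20Network&apiKey=YourAPIKey&signature=YourSignatureHash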

7.3.5.3. Extend Port Range for CloudStack Console Proxy

(Applies only to VMware vSphere version 4.x)
You need to extend the range of firewall ports that the console proxy works with on the hosts. This is to enable the console proxy to work with VMware-based VMs. The default additional port range is 59000-60000. To extend the port range, log in to the VMware ESX service console on each host and run the following commands:
# esxcfg-firewall -o 59000-60000,tcp,in,vncextras
# esxcfg-firewall -o 59000-60000,tcp,out,vncextras

7.3.5.4. Configure NIC Bonding for vSphere

NIC bonding on vSphere hosts may be done according to the vSphere installation guide.

7.3.6. Storage Preparation for vSphere (iSCSI only)

Use of iSCSI requires preparatory work in vCenter. You must add an iSCSI target and create an iSCSI datastore.
If you are using NFS, skip this section.

7.3.6.1. Enable iSCSI initiator for ESXi hosts

  1. In vCenter, go to Hosts and Clusters, select the Configuration tab, and click the Storage Adapters link. You will see:
    vsphereclient.png: vSphere client
  2. Select iSCSI software adapter and click Properties.
    vsphereclient.png: vSphere client
  3. Click the Configure... button.
    vsphereclient.png: vSphere client
  4. Check Enabled to enable the initiator.
  5. Click OK to save.

7.3.6.2. Add iSCSI target

Under the properties dialog, add the iSCSI target info:
vsphereclient.png: vSphere client
Repeat these steps for all ESXi hosts in the cluster.

7.3.6.3. Create an iSCSI datastore

You should now create a VMFS datastore. Follow these steps to do so:
  1. Select Home/Inventory/Datastores.
  2. Right click on the datacenter node.
  3. Choose Add Datastore... command.
  4. Follow the wizard to create an iSCSI datastore.
This procedure should be done on one host in the cluster. It is not necessary to do this on all hosts.
vsphereclient.png: vSphere client

7.3.6.4. Multipathing for vSphere (Optional)

Storage multipathing on vSphere nodes may be done according to the vSphere installation guide.

7.3.7. Add Hosts or Configure Clusters (vSphere)

Use vCenter to create a vCenter cluster and add your desired hosts to the cluster. You will later add the entire cluster to CloudStack. (see Section 5.5.2, “Add Cluster: vSphere”).

7.3.8. Applying Hotfixes to a VMware vSphere Host

  1. Disconnect the VMware vSphere cluster from CloudStack. It should remain disconnected long enough to apply the hotfix on the host.
    1. Log in to the CloudStack UI as root.
    2. Navigate to the VMware cluster, click Actions, and select Unmanage.
    3. Watch the cluster status until it shows Unmanaged.
  2. Perform the following on each of the ESXi hosts in the cluster:
    1. Move each of the ESXi hosts in the cluster to maintenance mode.
    2. Ensure that all the VMs are migrated to other hosts in that cluster.
    3. If there is only one host in that cluster, shut down all the VMs and move the host into maintenance mode.
    4. Apply the patch on the ESXi host.
    5. Restart the host if prompted.
    6. Cancel the maintenance mode on the host.
  3. Reconnect the cluster to CloudStack:
    1. Log in to the CloudStack UI as root.
    2. Navigate to the VMware cluster, click Actions, and select Manage.
    3. Watch the status to see that all the hosts come up. It might take several minutes for the hosts to come up.
      Alternatively, verify the host state is properly synchronized and updated in the CloudStack database.

Chapter 8. Additional Installation Options

The next few sections describe CloudStack features above and beyond the basic deployment options.

8.1. Installing the Usage Server (Optional)

You can optionally install the Usage Server once the Management Server is configured properly. The Usage Server takes data from the events in the system and enables usage-based billing for accounts.
When multiple Management Servers are present, the Usage Server may be installed on any number of them. The Usage Servers will coordinate usage processing. A site that is concerned about availability should install Usage Servers on at least two Management Servers.

8.1.1. Requirements for Installing the Usage Server

  • The Management Server must be running when the Usage Server is installed.
  • The Usage Server must be installed on the same server as a Management Server.

8.1.2. Steps to Install the Usage Server

  1. Run ./install.sh.
    # ./install.sh
    
    You should see a few messages as the installer prepares, followed by a list of choices.
  2. Choose "S" to install the Usage Server.
       > S
    
  3. Once installed, start the Usage Server with the following command.
    # service cloud-usage start
    
The Administration Guide discusses further configuration of the Usage Server.

8.2. SSL (Optional)

CloudStack provides HTTP access in its default installation. Many sites choose to implement SSL, using a variety of technologies. As a result, CloudStack exposes HTTP by default, under the assumption that each site will implement SSL according to its usual practice.
CloudStack uses Tomcat as its servlet container. For sites that would like CloudStack to terminate the SSL session, Tomcat’s SSL access may be enabled. Tomcat SSL configuration is described at http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html.
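For example, a minimal sketch of a Tomcat HTTPS connector, following the Tomcat 6 SSL HOW-TO and added inside the <Service> element of Tomcat's server.xml; the keystore path and password shown here are placeholders you must replace with your own:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/your.keystore" keystorePass="yourpassword" />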

8.3. Database Replication (Optional)

CloudStack supports database replication from one MySQL node to another. This is achieved using standard MySQL replication. You may want to do this as insurance against MySQL server or storage loss. MySQL replication is implemented using a master/slave model. The master is the node that the Management Servers are configured to use. The slave is a standby node that receives all write operations from the master and applies them to a local, redundant copy of the database. The following steps are a guide to implementing MySQL replication.

Note

Creating a replica is not a backup solution. You should develop a backup procedure for the MySQL data that is distinct from replication.
  1. Ensure that this is a fresh install with no data in the master.
  2. Edit my.cnf on the master and add the following in the [mysqld] section below datadir.
    log_bin=mysql-bin
    server_id=1
    
    The server_id must be unique with respect to other servers. The recommended way to achieve this is to give the master an ID of 1 and each slave a sequential number greater than 1, so that the servers are numbered 1, 2, 3, etc.
  3. Restart the MySQL service:
    # service mysqld restart
    
  4. Create a replication account on the master and give it privileges. We will use the "cloud-repl" user with the password "password". This assumes that master and slave run on the 172.16.1.0/24 network.
    # mysql -u root
    mysql> create user 'cloud-repl'@'172.16.1.%' identified by 'password';
    mysql> grant replication slave on *.* TO 'cloud-repl'@'172.16.1.%';
    mysql> flush privileges;
    mysql> flush tables with read lock;
    
  5. Leave the current MySQL session running.
  6. In a new shell start a second MySQL session.
  7. Retrieve the current position of the database.
    # mysql -u root
    mysql> show master status;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000001 |      412 |              |                  |
    +------------------+----------+--------------+------------------+
    
  8. Note the file and the position that are returned by your instance.
  9. Exit from this session.
  10. Complete the master setup. Returning to your first session on the master, release the locks and exit MySQL.
    mysql> unlock tables;
    
  11. Install and configure the slave. On the slave server, run the following commands.
    # yum install mysql-server
    # chkconfig mysqld on
    
  12. Edit my.cnf and add the following lines in the [mysqld] section below datadir.
    server_id=2
    innodb_rollback_on_timeout=1
    innodb_lock_wait_timeout=600
    
  13. Restart MySQL.
    # service mysqld restart
    
  14. Instruct the slave to connect to and replicate from the master. Replace the IP address, password, log file, and position with the values you have used in the previous steps.
    mysql> change master to
        -> master_host='172.16.1.217',
        -> master_user='cloud-repl',
        -> master_password='password',
        -> master_log_file='mysql-bin.000001',
        -> master_log_pos=412;
    
  15. Then start replication on the slave (a quick verification sketch follows these steps).
    mysql> start slave;
    
  16. Optionally, open port 3306 on the slave as was done on the master earlier.
    This is not required for replication to work. But if you choose not to do this, you will need to do it when failover to the replica occurs.
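To confirm that replication is running after step 15, you can check the slave's status; both the Slave_IO_Running and Slave_SQL_Running fields in the output should show Yes:
mysql> show slave status\G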

8.3.1. Failover

This will provide for a replicated database that can be used to implement manual failover for the Management Servers. CloudStack failover from one MySQL instance to another is performed by the administrator. In the event of a database failure you should:
  1. Stop the Management Servers (via service cloud-management stop).
  2. Change the replica's configuration to be a master and restart it.
  3. Ensure that the replica's port 3306 is open to the Management Servers.
  4. Make a change so that the Management Server uses the new database. The simplest process here is to put the IP address of the new database server into each Management Server's /etc/cloud/management/db.properties (a minimal sketch follows these steps).
  5. Restart the Management Servers:
    # service cloud-management start
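For step 4, a minimal sketch of the db.properties change, assuming the new database server is at 172.16.1.218 (a hypothetical address) and the property names match a default install:
# /etc/cloud/management/db.properties (excerpt)
db.cloud.host=172.16.1.218
db.usage.host=172.16.1.218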
    

Chapter 9. Choosing a Deployment Architecture

The architecture used in a deployment will vary depending on the size and purpose of the deployment. This section contains examples of deployment architecture, including a small-scale deployment useful for test and trial deployments and a fully-redundant large-scale setup for production deployments.

9.1. Small-Scale Deployment

Small-Scale Deployment
This diagram illustrates the network architecture of a small-scale CloudStack deployment.
  • A firewall provides a connection to the Internet. The firewall is configured in NAT mode. The firewall forwards HTTP requests and API calls from the Internet to the Management Server. The Management Server resides on the management network.
  • A layer-2 switch connects all physical servers and storage.
  • A single NFS server functions as both the primary and secondary storage.
  • The Management Server is connected to the management network.

9.2. Large-Scale Redundant Setup

Large-Scale Redundant Setup
This diagram illustrates the network architecture of a large-scale CloudStack deployment.
  • A layer-3 switching layer is at the core of the data center. A router redundancy protocol like VRRP should be deployed. Typically high-end core switches also include firewall modules. Separate firewall appliances may also be used if the layer-3 switch does not have integrated firewall capabilities. The firewalls are configured in NAT mode. The firewalls provide the following functions:
    • Forwards HTTP requests and API calls from the Internet to the Management Server. The Management Server resides on the management network.
    • When the cloud spans multiple zones, the firewalls should enable site-to-site VPN such that servers in different zones can directly reach each other.
  • A layer-2 access switch layer is established for each pod. Multiple switches can be stacked to increase port count. In either case, redundant pairs of layer-2 switches should be deployed.
  • The Management Server cluster (including front-end load balancers, Management Server nodes, and the MySQL database) is connected to the management network through a pair of load balancers.
  • Secondary storage servers are connected to the management network.
  • Each pod contains storage and computing servers. Each storage and computing server should have redundant NICs connected to separate layer-2 access switches.

9.3. Separate Storage Network

In the large-scale redundant setup described in the previous section, storage traffic can overload the management network. A separate storage network is optional for deployments. Storage protocols such as iSCSI are sensitive to network delays. A separate storage network ensures guest network traffic contention does not impact storage performance.

9.4. Multi-Node Management Server

The CloudStack Management Server is deployed on one or more front-end servers connected to a single MySQL database. Optionally a pair of hardware load balancers distributes requests from the web. A backup management server set may be deployed using MySQL replication at a remote site to add DR capabilities.
Multi-Node Management Server
The administrator must decide the following.
  • Whether or not load balancers will be used.
  • How many Management Servers will be deployed.
  • Whether MySQL replication will be deployed to enable disaster recovery.

9.5. Multi-Site Deployment

The CloudStack platform scales well into multiple sites through the use of zones. The following diagram shows an example of a multi-site deployment.
Example Of A Multi-Site Deployment
Data Center 1 houses the primary Management Server as well as zone 1. The MySQL database is replicated in real time to the secondary Management Server installation in Data Center 2.
Separate Storage Network
This diagram illustrates a setup with a separate storage network. Each server has four NICs, two connected to pod-level network switches and two connected to storage network switches.
There are two ways to configure the storage network:
  • Bonded NIC and redundant switches can be deployed for NFS. In NFS deployments, redundant switches and bonded NICs still result in one network (one CIDR block + default gateway address).
  • iSCSI can take advantage of two separate storage networks (two CIDR blocks each with its own default gateway). Multipath iSCSI client can failover and load balance between separate storage networks.
NIC Bonding And Multipath I/O
This diagram illustrates the differences between NIC bonding and Multipath I/O (MPIO). NIC bonding configuration involves only one network. MPIO involves two separate networks.

Chapter 10. Accounts

10.1. Accounts, Users, and Domains

Accounts
An account typically represents a customer of the service provider or a department in a large organization. Multiple users can exist in an account.
Domains
Accounts are grouped by domains. Domains usually contain multiple accounts that have some logical relationship to each other and a set of delegated administrators with some authority over the domain and its subdomains. For example, a service provider with several resellers could create a domain for each reseller.
For each account created, the Cloud installation creates three different types of user accounts: root administrator, domain administrator, and user.
Users
Users are like aliases in the account. Users in the same account are not isolated from each other, but they are isolated from users in other accounts. Most installations need not surface the notion of users; they just have one user per account. The same user cannot belong to multiple accounts.
Username is unique in a domain across accounts in that domain. The same username can exist in other domains, including sub-domains. Domain name can repeat only if the full pathname from root is unique. For example, you can create root/d1, as well as root/foo/d1, and root/sales/d1.
Administrators are accounts with special privileges in the system. There may be multiple administrators in the system. Administrators can create or delete other administrators, and change the password for any user in the system.
Domain Administrators
Domain administrators can perform administrative operations for users who belong to that domain. Domain administrators do not have visibility into physical servers or other domains.
Root Administrator
Root administrators have complete access to the system, including managing templates, service offerings, customer care administrators, and domains.
The resources belong to the account, not individual users in that account. For example, billing, resource limits, and so on are maintained by the account, not the users. A user can operate on any resource in the account provided the user has privileges for that operation. The privileges are determined by the role.

10.2. Using an LDAP Server for User Authentication

You can use an external LDAP server such as Microsoft Active Directory or ApacheDS to authenticate CloudStack end-users. Just map CloudStack accounts to the corresponding LDAP accounts using a query filter. The query filter is written using the query syntax of the particular LDAP server, and can include special wildcard characters provided by CloudStack for matching common values such as the user’s email address and name. CloudStack will search the external LDAP directory tree starting at a specified base directory and return the distinguished name (DN) and password of the matching user. This information, along with the given password, is used to authenticate the user.
To set up LDAP authentication in CloudStack, call the CloudStack API command ldapConfig and provide the following:
  • Hostname or IP address and listening port of the LDAP server
  • Base directory and query filter
  • Search user DN credentials, which give CloudStack permission to search on the LDAP server
  • SSL keystore and password, if SSL is used

10.2.1. Example LDAP Configuration Commands

To understand the examples in this section, you need to know the basic concepts behind calling the CloudStack API, which are explained in the Developer’s Guide.
The following shows an example invocation of ldapConfig with an ApacheDS LDAP server
http://127.0.0.1:8080/client/api?command=ldapConfig&hostname=127.0.0.1&searchbase=ou%3Dtesting%2Co%3Dproject&queryfilter=%28%26%28uid%3D%25u%29%29&binddn=cn%3DJohn+Singh%2Cou%3Dtesting%2Co%3Dproject&bindpass=secret&port=10389&ssl=true&truststore=C%3A%2Fcompany%2Finfo%2Ftrusted.ks&truststorepass=secret&response=json&apiKey=YourAPIKey&signature=YourSignatureHash
The command must be URL-encoded. Here is the same example without the URL encoding:
http://127.0.0.1:8080/client/api?command=ldapConfig
&hostname=127.0.0.1
&searchbase=ou=testing,o=project
&queryfilter=(&(uid=%u))
&binddn=cn=John+Singh,ou=testing,o=project
&bindpass=secret
&port=10389
&ssl=true
&truststore=C:/company/info/trusted.ks
&truststorepass=secret
&response=json
&apiKey=YourAPIKey&signature=YourSignatureHash
The following shows a similar command for Active Directory. Here, the search base is the testing group within a company, and the users are matched up based on email address.
http://10.147.29.101:8080/client/api?command=ldapConfig&hostname=10.147.28.250&searchbase=OU%3Dtesting%2CDC%3Dcompany&queryfilter=%28%26%28mail%3D%25e%29%29&binddn=CN%3DAdministrator%2COU%3Dtesting%2CDC%3Dcompany&bindpass=1111_aaaa&port=389&response=json&apiKey=YourAPIKey&signature=YourSignatureHash
The next few sections explain some of the concepts you will need to know when filling out the ldapConfig parameters.

10.2.2. Search Base

An LDAP query is relative to a given node of the LDAP directory tree, called the search base. The search base is the distinguished name (DN) of a level of the directory tree below which all users can be found. The users can be in the immediate base directory or in some subdirectory. The search base may be equivalent to the organization, group, or domain name. The syntax for writing a DN varies depending on which LDAP server you are using. A full discussion of distinguished names is outside the scope of our documentation. The following shows some example search bases for finding users in the testing department:
  • ApacheDS: ou=testing,o=project
  • Active Directory: OU=testing, DC=company

10.2.3. Query Filter

The query filter is used to find a mapped user in the external LDAP server. The query filter should uniquely map the CloudStack user to LDAP user for a meaningful authentication. For more information about query filter syntax, consult the documentation for your LDAP server.
The CloudStack query filter wildcards are:
  • %u: user name
  • %e: email address
  • %n: first and last name
The following examples assume you are using Active Directory, and refer to user attributes from the Active Directory schema.
If the CloudStack user name is the same as the LDAP user ID:
(uid=%u)
If the CloudStack user name is the LDAP display name:
(displayName=%u)
To find a user by email address:
(mail=%e)
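The wildcards can also be combined using standard LDAP filter operators. For example, to match a user by either user name or email address:
(|(uid=%u)(mail=%e))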

10.2.4. Search User Bind DN

The bind DN is the user on the external LDAP server permitted to search the LDAP directory within the defined search base. When the DN is returned, the DN and passed password are used to authenticate the CloudStack user with an LDAP bind. A full discussion of bind DNs is outside the scope of our documentation. The following shows some example bind DNs:
  • ApacheDS: cn=Administrator,dc=testing,ou=project,ou=org
  • Active Directory: CN=Administrator, OU=testing, DC=company, DC=com

10.2.5. SSL Keystore Path and Password

If the LDAP server requires SSL, you need to enable it in the ldapConfig command by setting the parameters ssl, truststore, and truststorepass. Before enabling SSL for ldapConfig, you need to get the certificate which the LDAP server is using and add it to a trusted keystore. You will need to know the path to the keystore and the password.

Chapter 11. User Services Overview

In addition to the physical and logical infrastructure of your cloud, and the CloudStack software and servers, you also need a layer of user services so that people can actually make use of the cloud. This means not just a user UI, but a set of options and resources that users can choose from, such as templates for creating virtual machines, disk storage, and more. If you are running a commercial service, you will be keeping track of what services and resources users are consuming and charging them for that usage. Even if you do not charge anything for people to use your cloud – say, if the users are strictly internal to your organization, or just friends who are sharing your cloud – you can still keep track of what services they use and how much of each they consume.

11.1. Service Offerings, Disk Offerings, Network Offerings, and Templates

A user creating a new instance can make a variety of choices about its characteristics and capabilities. CloudStack provides several ways to present users with choices when creating a new instance:
  • Service Offerings, defined by the CloudStack administrator, provide a choice of CPU speed, number of CPUs, RAM size, tags on the root disk, and other choices. See Creating a New Compute Offering.
  • Disk Offerings, defined by the CloudStack administrator, provide a choice of disk size for primary data storage. See Creating a New Disk Offering.
  • Network Offerings, defined by the CloudStack administrator, describe the feature set that is available to end users from the virtual router or external networking devices on a given guest network. See Network Offerings.
  • Templates, defined by the CloudStack administrator or by any CloudStack user, are the base OS images that the user can choose from when creating a new instance. For example, CloudStack includes CentOS as a template. See Working with Templates.
In addition to these choices that are provided for users, there is another type of service offering which is available only to the CloudStack root administrator, and is used for configuring virtual infrastructure resources. For more information, see Upgrading a Virtual Router with System Service Offerings.

Chapter 12. Using Projects to Organize Users and Resources

12.1. Overview of Projects

Projects are used to organize people and resources. CloudStack users within a single domain can group themselves into project teams so they can collaborate and share virtual resources such as VMs, snapshots, templates, data disks, and IP addresses. CloudStack tracks resource usage per project as well as per user, so the usage can be billed to either a user account or a project. For example, a private cloud within a software company might have all members of the QA department assigned to one project, so the company can track the resources used in testing while the project members can more easily isolate their efforts from other users of the same cloud.
You can configure CloudStack to allow any user to create a new project, or you can restrict that ability to just CloudStack administrators. Once you have created a project, you become that project’s administrator, and you can add others within your domain to the project. CloudStack can be set up either so that you can add people directly to a project, or so that you have to send an invitation which the recipient must accept. Project members can view and manage all virtual resources created by anyone in the project (for example, share VMs). A user can be a member of any number of projects and can switch views in the CloudStack UI to show only project-related information, such as project VMs, fellow project members, project-related alerts, and so on.
The project administrator can pass on the role to another project member. The project administrator can also add more members, remove members from the project, set new resource limits (as long as they are below the global defaults set by the CloudStack administrator), and delete the project. When the administrator removes a member from the project, resources created by that user, such as VM instances, remain with the project. This brings us to the subject of resource ownership and which resources can be used by a project.
Resources created within a project are owned by the project, not by any particular CloudStack account, and they can be used only within the project. A user who belongs to one or more projects can still create resources outside of those projects, and those resources belong to the user’s account; they will not be counted against the project’s usage or resource limits. You can create project-level networks to isolate traffic within the project and provide network services such as port forwarding, load balancing, VPN, and static NAT. A project can also make use of certain types of resources from outside the project, if those resources are shared. For example, a shared network or public template is available to any project in the domain. A project can get access to a private template if the template’s owner will grant permission. A project can use any service offering or disk offering available in its domain; however, you cannot create private service and disk offerings at the project level.

12.2. Configuring Projects

Before CloudStack users start using projects, the CloudStack administrator must set up various systems to support them, including membership invitations, limits on project resources, and controls on who can create projects.

12.2.1. Setting Up Invitations

CloudStack can be set up either so that project administrators can add people directly to a project, or so that it is necessary to send an invitation which the recipient must accept. The invitation can be sent by email or through the user’s CloudStack account. If you want administrators to use invitations to add members to projects, turn on and set up the invitations feature in CloudStack.
  1. Log in as administrator to the CloudStack UI.
  2. In the left navigation, click Global Settings.
  3. In the search box, type project and click the search button. searchbutton.png: Searches projects
  4. In the search results, you can see a few other parameters you need to set to control how invitations behave. The table below shows global configuration parameters related to project invitations. Click the edit button to set each parameter.
    • project.invite.required: Set to true to turn on the invitations feature.
    • project.email.sender: The email address to show in the From field of invitation emails.
    • project.invite.timeout: Amount of time to allow for a new member to respond to the invitation.
    • project.smtp.host: Name of the host that acts as an email server to handle invitations.
    • project.smtp.password: (Optional) Password required by the SMTP server. You must also set project.smtp.username and set project.smtp.useAuth to true.
    • project.smtp.port: SMTP server’s listening port.
    • project.smtp.useAuth: Set to true if the SMTP server requires a username and password.
    • project.smtp.username: (Optional) User name required by the SMTP server for authentication. You must also set project.smtp.password and set project.smtp.useAuth to true.
  5. Restart the Management Server:
    # service cloud-management restart

12.2.2. Setting Resource Limits for Projects

The CloudStack administrator can set global default limits to control the amount of resources that can be owned by each project in the cloud. This serves to prevent uncontrolled usage of resources such as snapshots, IP addresses, and virtual machine instances. Domain administrators can override these resource limits for individual projects within their domains, as long as the new limits are below the global defaults set by the CloudStack root administrator. The root administrator can also set lower resource limits for any project in the cloud.

12.2.2.1. Setting Per-Project Resource Limits

The CloudStack root administrator or the domain administrator of the domain where the project resides can set new resource limits for an individual project. The project owner can set resource limits only if the owner is also a domain or root administrator.
The new limits must be below the global default limits set by the CloudStack administrator (as described in Section 12.2.2, “Setting Resource Limits for Projects”). If the project already owns more of a given type of resource than the new maximum, the resources are not affected; however, the project can not add any new resources of that type until the total drops below the new limit.
  1. Log in as administrator to the CloudStack UI.
  2. In the left navigation, click Projects.
  3. In Select View, choose Projects.
  4. Click the name of the project you want to work with.
  5. Click the Resources tab. This tab lists the current maximum amount that the project is allowed to own for each type of resource.
  6. Type new values for one or more resources.
  7. Click Apply.

12.2.2.2. Setting the Global Project Resource Limits

  1. Log in as administrator to the CloudStack UI.
  2. In the left navigation, click Global Settings.
  3. In the search box, type max.projects and click the search button.
  4. In the search results, you will see the parameters you can use to set per-project maximum resource amounts that apply to all projects in the cloud. No project can have more resources, but an individual project can have lower limits. Click the edit button to set each parameter. editbutton.png: Edits parameters
    • max.project.public.ips: Maximum number of public IP addresses that can be owned by any project in the cloud. See About Public IP Addresses.
    • max.project.snapshots: Maximum number of snapshots that can be owned by any project in the cloud. See Working with Snapshots.
    • max.project.templates: Maximum number of templates that can be owned by any project in the cloud. See Working with Templates.
    • max.project.uservms: Maximum number of guest virtual machines that can be owned by any project in the cloud. See Working With Virtual Machines.
    • max.project.volumes: Maximum number of data volumes that can be owned by any project in the cloud. See Working with Volumes.
  5. Restart the Management Server.
    # service cloud-management restart

12.2.3. Setting Project Creator Permissions

You can configure CloudStack to allow any user to create a new project, or you can restrict that ability to just CloudStack administrators.
  1. Log in as administrator to the CloudStack UI.
  2. In the left navigation, click Global Settings.
  3. In the search box, type allow.user.create.projects.
  4. Click the edit button to set the parameter. editbutton.png: Edits parameters
    • allow.user.create.projects: Set to true to allow end users to create projects. Set to false if you want only the CloudStack root administrator and domain administrators to create projects.
  5. Restart the Management Server.
    # service cloud-management restart

12.3. Creating a New Project

CloudStack administrators and domain administrators can create projects. If the global configuration parameter allow.user.create.projects is set to true, end users can also create projects.
  1. Log in as administrator to the CloudStack UI.
  2. In the left navigation, click Projects.
  3. In Select view, click Projects.
  4. Click New Project.
  5. Give the project a name and description for display to users, then click Create Project.
  6. A screen appears where you can immediately add more members to the project. This is optional. Click Next when you are ready to move on.
  7. Click Save.

12.4. Adding Members to a Project

New members can be added to a project by the project’s administrator, the domain administrator of the domain where the project resides or any parent domain, or the CloudStack root administrator. There are two ways to add members in CloudStack, but only one way is enabled at a time:
  • If invitations have been enabled, you can send invitations to new members.
  • If invitations are not enabled, you can add members directly through the UI.

12.4.1. Sending Project Membership Invitations

Use these steps to add a new member to a project if the invitations feature is enabled in the cloud as described in Section 12.2.1, “Setting Up Invitations”. If the invitations feature is not turned on, use the procedure in Adding Project Members From the UI.
  1. Log in to the CloudStack UI.
  2. In the left navigation, click Projects.
  3. In Select View, choose Projects.
  4. Click the name of the project you want to work with.
  5. Click the Invitations tab.
  6. In Add by, select one of the following:
    1. Account – The invitation will appear in the user’s Invitations tab in the Project View. See Using the Project View.
    2. Email – The invitation will be sent to the user’s email address. Each emailed invitation includes a unique code called a token which the recipient will provide back to CloudStack when accepting the invitation. Email invitations will work only if the global parameters related to the SMTP server have been set. See Section 12.2.1, “Setting Up Invitations”.
  7. Type the user name or email address of the new member you want to add, and click Invite. Type the CloudStack user name if you chose Account in the previous step. If you chose Email, type the email address. You can invite only people who have an account in this cloud within the same domain as the project. However, you can send the invitation to any email address.
  8. To view and manage the invitations you have sent, return to this tab. When an invitation is accepted, the new member will appear in the project’s Accounts tab.

12.4.2. Adding Project Members From the UI

The steps below tell how to add a new member to a project if the invitations feature is not enabled in the cloud. If the invitations feature is enabled in the cloud, as described in Section 12.2.1, “Setting Up Invitations”, use the procedure in Section 12.4.1, “Sending Project Membership Invitations”.
  1. Log in to the CloudStack UI.
  2. In the left navigation, click Projects.
  3. In Select View, choose Projects.
  4. Click the name of the project you want to work with.
  5. Click the Accounts tab. The current members of the project are listed.
  6. Type the account name of the new member you want to add, and click Add Account. You can add only people who have an account in this cloud and within the same domain as the project.

12.5. Accepting a Membership Invitation

If you have received an invitation to join a CloudStack project, and you want to accept the invitation, follow these steps:
  1. Log in to the CloudStack UI.
  2. In the left navigation, click Projects.
  3. In Select View, choose Invitations.
  4. If you see the invitation listed onscreen, click the Accept button.
    Invitations listed on screen were sent to you using your CloudStack account name.
  5. If you received an email invitation, click the Enter Token button, and provide the project ID and unique ID code (token) from the email.

12.6. Suspending or Deleting a Project

When a project is suspended, it retains the resources it owns, but they can no longer be used. No new resources or members can be added to a suspended project.
When a project is deleted, its resources are destroyed, and member accounts are removed from the project. The project’s status is shown as Disabled pending final deletion.
A project can be suspended or deleted by the project administrator, the domain administrator of the domain the project belongs to or of its parent domain, or the CloudStack root administrator.
  1. Log in to the CloudStack UI.
  2. In the left navigation, click Projects.
  3. In Select View, choose Projects.
  4. Click the name of the project.
  5. Click one of the buttons:
    To delete the project, click the Delete Project button. deletebutton.png: Removes a project
    To suspend the project, click the Suspend Project button. deletebutton.png: Suspends a project
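Projects can also be suspended or deleted through the API. A hedged sketch, again assuming the unauthenticated integration API port is enabled for testing and PROJECT_ID is a placeholder:
    curl "http://localhost:8096/client/api?command=suspendProject&id=PROJECT_ID"
    curl "http://localhost:8096/client/api?command=activateProject&id=PROJECT_ID"    # reactivate a suspended project
    curl "http://localhost:8096/client/api?command=deleteProject&id=PROJECT_ID"      # destroys the project's resources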

12.7. Using the Project View

If you are a member of a project, you can use CloudStack’s project view to see project members, resources consumed, and more. The project view shows only information related to one project. It is a useful way to filter out other information so you can concentrate on a project's status and resources.
  1. Log in to the CloudStack UI.
  2. Click Project View.
  3. The project dashboard appears, showing the project’s VMs, volumes, users, events, network settings, and more. From the dashboard, you can:
    • Click the Accounts tab to view and manage project members. If you are the project administrator, you can add new members, remove members, or change the role of a member from user to admin. Only one member at a time can have the admin role, so if you set another user’s role to admin, your role will change to regular user.
    • (If invitations are enabled) Click the Invitations tab to view and manage invitations that have been sent to new project members but not yet accepted. Pending invitations will remain in this list until the new member accepts, the invitation timeout is reached, or you cancel the invitation.

Chapter 13. Service Offerings

In this chapter we discuss compute, disk, and system service offerings. Network offerings are discussed in the section on setting up networking for users.

13.1. Compute and Disk Service Offerings

A service offering is a set of virtual hardware features such as CPU core count and speed, memory, and disk size. The CloudStack administrator can set up various offerings, and then end users choose from the available offerings when they create a new VM. A service offering includes the following elements:
  • CPU, memory, and network resource guarantees
  • How resources are metered
  • How the resource usage is charged
  • How often the charges are generated
For example, one service offering might allow users to create a virtual machine instance that is equivalent to a 1 GHz Intel® Core™ 2 CPU, with 1 GB memory at $0.20/hour, with network traffic metered at $0.10/GB. Based on the user’s selected offering, CloudStack emits usage records that can be integrated with billing systems. CloudStack separates service offerings into compute offerings and disk offerings. The computing service offering specifies:
  • Guest CPU
  • Guest RAM
  • Guest Networking type (virtual or direct)
  • Tags on the root disk
The disk offering specifies:
  • Disk size (optional). An offering without a disk size will allow users to pick their own
  • Tags on the data disk

13.1.1. Creating a New Compute Offering

To create a new compute offering:
  1. Log in with admin privileges to the CloudStack UI.
  2. In the left navigation bar, click Service Offerings.
  3. In Select Offering, choose Compute Offering.
  4. Click Add Compute Offering.
  5. In the dialog, make the following choices:
    • Name: Any desired name for the service offering.
    • Description: A short description of the offering that can be displayed to users
    • Storage type: The type of disk that should be allocated. Local allocates from storage attached directly to the host where the VM is running. Shared allocates from storage accessible via NFS.
    • # of CPU cores: The number of cores which should be allocated to a VM with this offering
    • CPU (in MHz): The CPU speed of the cores that the VM is allocated. For example, “2000” would provide for a 2 GHz clock.
    • Memory (in MB): The amount of memory in megabytes that the VM should be allocated. For example, “2048” would provide for a 2 GB RAM allocation.
    • Network Rate: Allowed data transfer rate in Mbps (megabits per second).
    • Offer HA: If yes, the administrator can choose to have the VM be monitored and made as highly available as possible.
    • Storage Tags: The tags that should be associated with the primary storage used by the VM.
    • Host Tags: (Optional) Any tags that you use to organize your hosts
    • CPU cap: Whether to limit the level of CPU usage even if spare capacity is available.
    • Public: Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.
  6. Click Add.
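The same offering can be created through the API with createServiceOffering. A minimal sketch with example values, assuming the unauthenticated integration API port (integration.api.port, for example 8096) is enabled for testing:
    # 2 cores at 2 GHz with 2 GB RAM and HA enabled
    curl "http://localhost:8096/client/api?command=createServiceOffering&name=Medium&displaytext=Medium+2vCPU+2GB&cpunumber=2&cpuspeed=2000&memory=2048&offerha=true"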

13.1.2. Creating a New Disk Offering

To create a new disk offering:
  1. Log in with admin privileges to the CloudStack UI.
  2. In the left navigation bar, click Service Offerings.
  3. In Select Offering, choose Disk Offering.
  4. Click Add Disk Offering.
  5. In the dialog, make the following choices:
    • Name. Any desired name for the disk offering.
    • Description. A short description of the offering that can be displayed to users
    • Custom Disk Size. If checked, the user can set their own disk size. If not checked, the root administrator must define a value in Disk Size.
    • Disk Size. Appears only if Custom Disk Size is not selected. Define the volume size in GB.
    • (Optional) Storage Tags. The tags that should be associated with the primary storage for this disk. Tags are a comma-separated list of attributes of the storage. For example, "ssd,blue". Tags are also added on Primary Storage. CloudStack matches tags on a disk offering to tags on the storage. If a tag is present on a disk offering, that tag (or tags) must also be present on Primary Storage for the volume to be provisioned. If no such primary storage exists, allocation from the disk offering will fail.
    • Public. Indicate whether the disk offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.
  6. Click Add.
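Disk offerings can be created the same way through the API with createDiskOffering. A hedged sketch with example values, again assuming the integration API port is enabled for testing:
    # fixed 20 GB data disk offering
    curl "http://localhost:8096/client/api?command=createDiskOffering&name=Data20GB&displaytext=20+GB+data+disk&disksize=20"
    # offering that lets the user choose the size when the volume is created
    curl "http://localhost:8096/client/api?command=createDiskOffering&name=CustomData&displaytext=Custom+size+data+disk&customized=true"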

13.1.3. Modifying or Deleting a Service Offering

Service offerings cannot be changed once created. This applies to both compute offerings and disk offerings.
A service offering can be deleted. If it is no longer in use, it is deleted immediately and permanently. If the service offering is still in use, it will remain in the database until all the virtual machines referencing it have been deleted. After deletion by the administrator, a service offering will not be available to end users that are creating new instances.

13.2. System Service Offerings

System service offerings provide a choice of CPU speed, number of CPUs, tags, and RAM size, just as other service offerings do. But rather than being used for virtual machine instances and exposed to users, system service offerings are used to change the default properties of virtual routers, console proxies, and other system VMs. System service offerings are visible only to the CloudStack root administrator. CloudStack provides default system service offerings. The CloudStack root administrator can create additional custom system service offerings.
When CloudStack creates a virtual router for a guest network, it uses default settings which are defined in the system service offering associated with the network offering. You can upgrade the capabilities of the virtual router by applying a new network offering that contains a different system service offering. All virtual routers in that network will begin using the settings from the new service offering.

13.2.1. Creating a New System Service Offering

To create a system service offering:
  1. Log in with admin privileges to the CloudStack UI.
  2. In the left navigation bar, click Service Offerings.
  3. In Select Offering, choose System Offering.
  4. Click Add System Service Offering.
  5. In the dialog, make the following choices:
    • Name. Any desired name for the system offering.
    • Description. A short description of the offering that can be displayed to users
    • System VM Type. Select the type of system virtual machine that this offering is intended to support.
    • Storage type. The type of disk that should be allocated. Local allocates from storage attached directly to the host where the system VM is running. Shared allocates from storage accessible via NFS.
    • # of CPU cores. The number of cores which should be allocated to a system VM with this offering
    • CPU (in MHz). The CPU speed of the cores that the system VM is allocated. For example, "2000" would provide for a 2 GHz clock.
    • Memory (in MB). The amount of memory in megabytes that the system VM should be allocated. For example, "2048" would provide for a 2 GB RAM allocation.
    • Network Rate. Allowed data transfer rate in Mbps (megabits per second).
    • Offer HA. If yes, the administrator can choose to have the system VM be monitored and as highly available as possible.
    • Storage Tags. The tags that should be associated with the primary storage used by the system VM.
    • Host Tags. (Optional) Any tags that you use to organize your hosts
    • CPU cap. Whether to limit the level of CPU usage even if spare capacity is available.
    • Public. Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.
  6. Click Add.
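A system offering can also be created with createServiceOffering by marking it as a system offering and naming the system VM type it applies to. A hedged sketch with example values, assuming the integration API port is enabled for testing:
    # larger offering intended for console proxy VMs
    curl "http://localhost:8096/client/api?command=createServiceOffering&name=CPVM-Large&displaytext=Larger+console+proxy&cpunumber=2&cpuspeed=1000&memory=2048&issystem=true&systemvmtype=consoleproxy"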

13.3. Network Throttling

Network throttling is the process of controlling network access and bandwidth usage based on certain rules. CloudStack controls this behaviour of the guest networks in the cloud by using the network rate parameter. This parameter is defined as the default data transfer rate in Mbps (megabits per second) allowed in a guest network. It defines the upper limit for network utilization. If the current utilization is below the allowed upper limit, access is granted; otherwise it is revoked.
You can throttle the network bandwidth either to control the usage above a certain limit for some accounts, or to control network congestion in a large cloud environment. The network rate for your cloud can be configured on the following:
  • Network Offering
  • Service Offering
  • Global parameter
If network rate is set to NULL in service offering, the value provided in the vm.network.throttling.rate global parameter is applied. If the value is set to NULL for network offering, the value provided in the network.throttling.rate global parameter is considered.
For the default public, storage, and management networks, network rate is set to 0. This implies that the public, storage, and management networks will have unlimited bandwidth by default. For default guest networks, network rate is set to NULL. In this case, network rate is defaulted to the global parameter value.
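As an illustration, the global defaults can be inspected and changed through the API as well as the UI. A hedged sketch, assuming the unauthenticated integration API port is enabled for testing; changes to global settings generally take effect only after the Management Server is restarted:
    # read the current guest-network default, then raise it to 200 Mbps
    curl "http://localhost:8096/client/api?command=listConfigurations&name=network.throttling.rate"
    curl "http://localhost:8096/client/api?command=updateConfiguration&name=network.throttling.rate&value=200"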
The following table gives you an overview of how network rate is applied on different types of networks in CloudStack.
Networks                                       Network Rate Is Taken from
Guest network of Virtual Router                Guest Network Offering
Public network of Virtual Router               Guest Network Offering
Storage network of Secondary Storage VM        System Network Offering
Management network of Secondary Storage VM     System Network Offering
Storage network of Console Proxy VM            System Network Offering
Management network of Console Proxy VM         System Network Offering
Storage network of Virtual Router              System Network Offering
Management network of Virtual Router           System Network Offering
Public network of Secondary Storage VM         System Network Offering
Public network of Console Proxy VM             System Network Offering
Default network of a guest VM                  Compute Offering
Additional networks of a guest VM              Corresponding Network Offerings
A guest VM must have a default network, and can also have many additional networks. Depending on various parameters, such as the host and virtual switch used, you can observe a difference in the network rate in your cloud. For example, on a VMware host the actual network rate varies based on where they are configured (compute offering, network offering, or both); the network type (shared or isolated); and traffic direction (ingress or egress).
The network rate set for a network offering used by a particular network in CloudStack is used for the traffic shaping policy of a port group (for example, port group A) for that network: a particular subnet or VLAN on the actual network. The virtual routers for that network connect to port group A, and by default instances in that network connect to the same port group. However, if an instance is deployed with a compute offering that also sets a network rate, and that rate is used for the traffic shaping policy of a second port group for the network (for example, port group B), then instances using this compute offering are connected to port group B instead of port group A.
The traffic shaping policy on standard port groups in VMware only applies to the egress traffic, and the net effect depends on the type of network used in CloudStack. In shared networks, ingress traffic is unlimited for CloudStack, and egress traffic is limited to the rate that applies to the port group used by the instance if any. If the compute offering has a network rate configured, this rate applies to the egress traffic, otherwise the network rate set for the network offering applies. For isolated networks, the network rate set for the network offering, if any, effectively applies to the ingress traffic. This is mainly because the network rate set for the network offering applies to the egress traffic from the virtual router to the instance. The egress traffic is limited by the rate that applies to the port group used by the instance if any, similar to shared networks.
For example:
Network rate of network offering = 10 Mbps
Network rate of compute offering = 200 Mbps
In shared networks, ingress traffic will not be limited for CloudStack, while egress traffic will be limited to 200 Mbps. In an isolated network, ingress traffic will be limited to 10 Mbps and egress to 200 Mbps.

13.4. Changing the Default System Offering for System VMs

You can manually change the system offering for a particular System VM. Additionally, as a CloudStack administrator, you can also change the default system offering used for System VMs.
  1. Create a new system offering.
  2. Back up the database:
    mysqldump -u root -p cloud | bzip2 > cloud_backup.sql.bz2
  3. Open a MySQL prompt:
    mysql -u cloud -p cloud
  4. Run the following queries on the cloud database.
    1. In the disk_offering table, identify the original default offering and the new offering you want to use by default.
      Take a note of the ID of the new offering.
      select id,name,unique_name,type from disk_offering;
    2. For the original default offering, set the value of unique_name to NULL.
      update disk_offering set unique_name = NULL where id = 10;
      Ensure that you use the correct value for the ID.
    3. For the new offering that you want to use by default, set the value of unique_name as follows:
      For the default Console Proxy VM (CPVM) offering, set unique_name to 'Cloud.com-ConsoleProxy'. For the default Secondary Storage VM (SSVM) offering, set unique_name to 'Cloud.com-SecondaryStorage'. For example:
      update disk_offering set unique_name = 'Cloud.com-ConsoleProxy' where id = 16;
  5. Restart CloudStack Management Server. Restarting is required because the default offerings are loaded into the memory at startup.
    service cloud-management restart
  6. Destroy the existing CPVM or SSVM offerings and wait for them to be recreated. The new CPVM or SSVM are configured with the new offering.

Chapter 14. Setting Up Networking for Users

14.1. Overview of Setting Up Networking for Users

People using cloud infrastructure have a variety of needs and preferences when it comes to the networking services provided by the cloud. As a CloudStack administrator, you can do the following things to set up networking for your users:
  • Set up physical networks in zones
  • Set up several different providers for the same service on a single physical network (for example, both Cisco and Juniper firewalls)
  • Bundle different types of network services into network offerings, so users can choose the desired network services for any given virtual machine
  • Add new network offerings as time goes on so end users can upgrade to a better class of service on their network
  • Provide more ways for a network to be accessed by a user, such as through a project of which the user is a member

14.2. About Virtual Networks

A virtual network is a logical construct that enables multi-tenancy on a single physical network. In CloudStack a virtual network can be shared or isolated.

14.2.1. Isolated Networks

An isolated network can be accessed only by virtual machines of a single account. Isolated networks have the following properties.
  • Resources such as VLAN are allocated and garbage collected dynamically
  • There is one network offering for the entire network
  • The network offering can be upgraded or downgraded but it is for the entire network

14.2.2. Shared Networks

A shared network can be accessed by virtual machines that belong to many different accounts. Network Isolation on shared networks is accomplished using techniques such as security groups (supported only in basic zones).
  • Shared Networks are created by the administrator
  • Shared Networks can be designated to a certain domain
  • Shared Network resources such as VLAN and physical network that it maps to are designated by the administrator
  • Shared Networks are isolated by security groups
  • Public Network is a shared network that is not shown to the end users

14.2.3. Runtime Allocation of Virtual Network Resources

When you define a new virtual network, all your settings for that network are stored in CloudStack. The actual network resources are activated only when the first virtual machine starts in the network. When all virtual machines have left the virtual network, the network resources are garbage collected so they can be allocated again. This helps to conserve network resources.

14.3. Network Service Providers

Note

For the most up-to-date list of supported network service providers, see the CloudStack UI or call listNetworkServiceProviders.
A service provider (also called a network element) is hardware or a virtual appliance that makes a network service possible; for example, a firewall appliance can be installed in the cloud to provide firewall service. On a single network, multiple providers can provide the same network service. For example, a firewall service may be provided by Cisco or Juniper devices in the same physical network.
You can have multiple instances of the same service provider in a network (say, more than one Juniper SRX device).
If different providers are set up to provide the same service on the network, the administrator can create network offerings so users can specify which network service provider they prefer (along with the other choices offered in network offerings). Otherwise, CloudStack will choose which provider to use whenever the service is called for.
Supported Network Service Providers
CloudStack ships with an internal list of the supported service providers, and you can choose from this list when creating a network offering.
                     Virtual Router   Citrix NetScaler   Juniper SRX   F5 BigIP   Host based (KVM/Xen)
Remote Access VPN    Yes              No                 No            No         No
DNS/DHCP/User Data   Yes              No                 No            No         No
Firewall             Yes              No                 Yes           No         No
Load Balancing       Yes              Yes                No            Yes        No
Elastic IP           No               Yes                No            No         No
Elastic LB           No               Yes                No            No         No
Source NAT           Yes              No                 Yes           No         No
Static NAT           Yes              Yes                Yes           No         No
Port Forwarding      Yes              No                 Yes           No         No

14.4. Network Offerings

Note

For the most up-to-date list of supported network services, see the CloudStack UI or call listNetworkServices.
A network offering is a named set of network services, such as:
  • DHCP
  • DNS
  • Source NAT
  • Static NAT
  • Port Forwarding
  • Load Balancing
  • Firewall
  • VPN
  • (Optional) Name one of several available providers to use for a given service, such as Juniper for the firewall
  • (Optional) Network tag to specify which physical network to use
When creating a new VM, the user chooses one of the available network offerings, and that determines which network services the VM can use.
The CloudStack administrator can create any number of custom network offerings, in addition to the default network offerings provided by CloudStack. By creating multiple custom network offerings, you can set up your cloud to offer different classes of service on a single multi-tenant physical network. For example, while the underlying physical wiring may be the same for two tenants, tenant A may only need simple firewall protection for their website, while tenant B may be running a web server farm and require a scalable firewall solution, load balancing solution, and alternate networks for accessing the database backend.

Note

If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.
When creating a new virtual network, the CloudStack administrator chooses which network offering to enable for that network. Each virtual network is associated with one network offering. A virtual network can be upgraded or downgraded by changing its associated network offering. If you do this, be sure to reprogram the physical network to match.
CloudStack also has internal network offerings for use by CloudStack system VMs. These network offerings are not visible to users but can be modified by administrators.

14.4.1. Creating a New Network Offering

To create a network offering:
  1. Log in with admin privileges to the CloudStack UI.
  2. In the left navigation bar, click Service Offerings.
  3. In Select Offering, choose Network Offering.
  4. Click Add Network Offering.
  5. In the dialog, make the following choices:
    • Name. Any desired name for the network offering.
    • Description. A short description of the offering that can be displayed to users.
    • Network Rate. Allowed data transfer rate in Mbps (megabits per second).
    • Guest Type. Choose whether the guest network is isolated or shared.
      For a description of this term, see the Administration Guide.
    • Persistent. Indicate whether the guest network is persistent or not. A persistent network is one that can be provisioned without having to deploy a VM on it. For more information, see Section 20.20, “Persistent Networks”.
    • Specify VLAN. (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used.
    • VPC. This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see Section 20.19.1, “About Virtual Private Clouds”.
    • Supported Services. Select one or more of the possible network services. For some services, you must also choose the service provider; for example, if you select Load Balancer, you can choose the CloudStack virtual router or any other load balancers that have been configured in the cloud. Depending on which services you choose, additional fields may appear in the rest of the dialog box.
      Based on the guest network type selected, you can see the following supported services:
      Supported Services   Isolated        Shared          Description
      DHCP                 Supported       Supported       For more information, see Section 20.16, “DNS and DHCP”.
      DNS                  Supported       Supported       For more information, see Section 20.16, “DNS and DHCP”.
      Load Balancer        Supported       Supported       If you select Load Balancer, you can choose the CloudStack virtual router or any other load balancers that have been configured in the cloud.
      Firewall             Supported       Supported       For more information.
      Source NAT           Supported       Supported       If you select Source NAT, you can choose the CloudStack virtual router or any other Source NAT providers that have been configured in the cloud.
      Static NAT           Supported       Supported       If you select Static NAT, you can choose the CloudStack virtual router or any other Static NAT providers that have been configured in the cloud.
      Port Forwarding      Supported       Not Supported   If you select Port Forwarding, you can choose the CloudStack virtual router or any other Port Forwarding providers that have been configured in the cloud.
      VPN                  Supported       Not Supported   For more information, see Section 20.17, “VPN”.
      User Data            Not Supported   Supported       For more information, see the Administration Guide.
      Network ACL          Supported       Not Supported
      Security Groups      Not Supported   Supported
    • System Offering. If the service provider for any of the services selected in Supported Services is a virtual router, the System Offering field appears. Choose the system service offering that you want virtual routers to use in this network. For example, if you selected Load Balancer in Supported Services and selected a virtual router to provide load balancing, the System Offering field appears so you can choose between the CloudStack default system service offering and any custom system service offerings that have been defined by the CloudStack root administrator.
      For more information, see the Administration Guide.
    • Redundant router capability. Available only when Virtual Router is selected as the Source NAT provider. Select this option if you want to use two virtual routers in the network for uninterrupted connection: one operating as the master virtual router and the other as the backup. The master virtual router receives requests from and sends responses to the user’s VM. The backup virtual router is activated only when the master is down. After the failover, the backup becomes the master virtual router. CloudStack deploys the routers on different hosts to ensure reliability if one host is down.
    • Conserve mode. Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network. When conserve mode is off, the public IP can only be used for a single service. For example, a public IP used for a port forwarding rule cannot be used for defining other services, such as Static NAT or load balancing. When conserve mode is on, you can define more than one service on the same public IP.

      Note

      If StaticNAT is enabled, irrespective of the status of the conserve mode, no port forwarding or load balancing rule can be created for the IP. However, you can add the firewall rules by using the createFirewallRule command.
    • Tags. Network tag to specify which physical network to use.
  6. Click Add.
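Network offerings can also be created with the createNetworkOffering API call. A hedged sketch with example values, assuming the unauthenticated integration API port is enabled for testing; the service names shown are examples of identifiers accepted by supportedservices, and a specific provider per service can additionally be given with the serviceproviderlist parameter:
    # isolated guest network offering with the basic virtual-router services
    curl "http://localhost:8096/client/api?command=createNetworkOffering&name=IsolatedBasic&displaytext=Isolated+with+source+NAT&traffictype=Guest&guestiptype=Isolated&supportedservices=Dhcp,Dns,SourceNat,Firewall"
A newly created offering may still need to be enabled (for example with updateNetworkOffering) before users can select it.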

Chapter 15. Working With Virtual Machines

15.1. About Working with Virtual Machines

CloudStack provides administrators with complete control over the lifecycle of all guest VMs executing in the cloud. CloudStack provides several guest management operations for end users and administrators. VMs may be stopped, started, rebooted, and destroyed.
Guest VMs have a name and group. VM names and groups are opaque to CloudStack and are available for end users to organize their VMs. Each VM can have three names for use in different contexts. Only two of these names can be controlled by the user:
  • Instance name – a unique, immutable ID that is generated by CloudStack, and can not be modified by the user. This name conforms to the requirements in IETF RFC 1123.
  • Display name – the name displayed in the CloudStack web UI. Can be set by the user. Defaults to instance name.
  • Name – host name that the DHCP server assigns to the VM. Can be set by the user. Defaults to instance name
Guest VMs can be configured to be Highly Available (HA). An HA-enabled VM is monitored by the system. If the system detects that the VM is down, it will attempt to restart the VM, possibly on a different host. For more information, see HA-Enabled Virtual Machines.
Each new VM is allocated one public IP address. When the VM is started, CloudStack automatically creates a static NAT between this public IP address and the private IP address of the VM.
If elastic IP is in use (with the NetScaler load balancer), the IP address initially allocated to the new VM is not marked as elastic. The user must replace the automatically configured IP with a specifically acquired elastic IP, and set up the static NAT mapping between this new IP and the guest VM’s private IP. The VM’s original IP address is then released and returned to the pool of available public IPs.
CloudStack cannot distinguish a guest VM that was shut down by the user (such as with the “shutdown” command in Linux) from a VM that shut down unexpectedly. If an HA-enabled VM is shut down from inside the VM, CloudStack will restart it. To shut down an HA-enabled VM, you must go through the CloudStack UI or API.

15.2. Best Practices for Virtual Machines

The CloudStack administrator should monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are automatically redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use CloudStack global configuration settings to set this as the default limit. Monitor the VM activity in each cluster at all times. Keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation of more VMs to the cluster.

15.3. VM Lifecycle

Virtual machines can be in the following states:
basic-deployment.png: Basic two-machine CloudStack deployment
Once a virtual machine is destroyed, it cannot be recovered. All the resources used by the virtual machine will be reclaimed by the system. This includes the virtual machine’s IP address.
A stop will attempt to gracefully shut down the operating system, which typically involves terminating all the running applications. If the operating system cannot be stopped, it will be forcefully terminated. This has the same effect as pulling the power cord on a physical machine.
A reboot is a stop followed by a start.
CloudStack preserves the state of the virtual machine hard disk until the machine is destroyed.
A running virtual machine may fail because of hardware or network issues. A failed virtual machine is in the down state.
The system places the virtual machine into the down state if it does not receive the heartbeat from the hypervisor for three minutes.
The user can manually restart the virtual machine from the down state.
The system will start the virtual machine from the down state automatically if the virtual machine is marked as HA-enabled.

15.4. Creating VMs

Virtual machines are usually created from a template. Users can also create blank virtual machines. A blank virtual machine is a virtual machine without an OS template. Users can attach an ISO file and install the OS from the CD/DVD-ROM.

Note

You can create a VM without starting it. You can determine whether the VM needs to be started as part of the VM deployment. A request parameter, startVM, in the deployVm API provides this feature. For more information, see the Developer's Guide.
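As an illustration of that parameter, a minimal deployVirtualMachine call that creates a VM without starting it, assuming the unauthenticated integration API port is enabled for testing; the IDs in capitals are placeholders obtained from listServiceOfferings, listTemplates, and listZones:
    curl "http://localhost:8096/client/api?command=deployVirtualMachine&serviceofferingid=OFFERING_ID&templateid=TEMPLATE_ID&zoneid=ZONE_ID&startvm=false"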
To create a VM from a template:
  1. Log in to the CloudStack UI as an administrator or user.
  2. In the left navigation bar, click Instances.
  3. Click Add Instance.
  4. Select a zone.
  5. Select a template, then follow the steps in the wizard. For more information about how the templates came to be in this list, see Chapter 17, Working with Templates.
  6. Be sure that the hardware you have allows starting the selected service offering.
  7. Click Submit and your VM will be created and started.

    Note

    For security reasons, the internal name of the VM is visible only to the root admin.
To create a VM from an ISO:

Note

(XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown.
  1. Log in to the CloudStack UI as an administrator or user.
  2. In the left navigation bar, click Instances.
  3. Click Add Instance.
  4. Select a zone.
  5. Select ISO Boot, and follow the steps in the wizard.
  6. Click Submit and your VM will be created and started.

15.5. Accessing VMs

Any user can access their own virtual machines. The administrator can access all VMs running in the cloud.
To access a VM through the CloudStack UI:
  1. Log in to the CloudStack UI as a user or admin.
  2. Click Instances, then click the name of a running VM.
  3. Click the View Console button.
To access a VM directly over the network:
  1. The VM must have some port open to incoming traffic. For example, in a basic zone, a new VM might be assigned to a security group which allows incoming traffic. This depends on what security group you picked when creating the VM. In other cases, you can open a port by setting up a port forwarding policy. See Section 20.14, “IP Forwarding and Firewalling”.
  2. If a port is open but you can not access the VM using ssh, it’s possible that ssh is not already enabled on the VM. This will depend on whether ssh is enabled in the template you picked when creating the VM. Access the VM through the CloudStack UI and enable ssh on the machine using the commands for the VM’s operating system.
  3. If the network has an external firewall device, you will need to create a firewall rule to allow access. See Section 20.14, “IP Forwarding and Firewalling”.

15.6. Stopping and Starting VMs

Once a VM instance is created, you can stop, restart, or delete it as needed. In the CloudStack UI, click Instances, select the VM, and use the Stop, Start, Reboot, and Destroy links.

15.7. Changing the VM Name, OS, or Group

After a VM is created, you can modify the display name, operating system, and the group it belongs to.
To access a VM through the CloudStack UI:
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation, click Instances.
  3. Select the VM that you want to modify.
  4. Click the Stop button to stop the VM. StopButton.png: button to stop a VM
  5. Click Edit. EditButton.png: button to edit the properties of a VM
  6. Make the desired changes to the following:
  7. Display name: Enter a new display name if you want to change the name of the VM.
  8. OS Type: Select the desired operating system.
  9. Group: Enter the group name for the VM.
  10. Click Apply.
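The same changes can be made with the updateVirtualMachine API call. A hedged sketch with placeholder IDs, assuming the unauthenticated integration API port is enabled for testing; as in the UI procedure, stop the VM first:
    curl "http://localhost:8096/client/api?command=stopVirtualMachine&id=VM_ID"
    curl "http://localhost:8096/client/api?command=updateVirtualMachine&id=VM_ID&displayname=web01&group=frontend&ostypeid=OS_TYPE_ID"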

15.8. Changing the Service Offering for a VM

To upgrade or downgrade the level of compute resources available to a virtual machine, you can change the VM's compute offering.
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation, click Instances.
  3. Choose the VM that you want to work with.
  4. Click the Stop button to stop the VM. StopButton.png: button to stop a VM
  5. Click the Change Service button. ChangeServiceButton.png: button to change the service of a VM
    The Change service dialog box is displayed.
  6. Select the offering you want to apply to the selected VM.
  7. Click OK.
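The equivalent API call is changeServiceForVirtualMachine, which also requires the VM to be stopped. A hedged sketch with placeholder IDs, assuming the unauthenticated integration API port is enabled for testing:
    curl "http://localhost:8096/client/api?command=stopVirtualMachine&id=VM_ID"
    curl "http://localhost:8096/client/api?command=changeServiceForVirtualMachine&id=VM_ID&serviceofferingid=NEW_OFFERING_ID"
    curl "http://localhost:8096/client/api?command=startVirtualMachine&id=VM_ID"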

15.9. Moving VMs Between Hosts (Manual Live Migration)

The CloudStack administrator can move a running VM from one host to another without interrupting service to users or going into maintenance mode. This is called manual live migration, and can be done under the following conditions:
  • The root administrator is logged in. Domain admins and users can not perform manual live migration of VMs.
  • The VM is running. Stopped VMs can not be live migrated.
  • The destination host must be in the same cluster as the original host.
  • The VM must not be using local disk storage.
  • The destination host must have enough available capacity. If not, the VM will remain in the "migrating" state until memory becomes available.
To manually live migrate a virtual machine
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation, click Instances.
  3. Choose the VM that you want to migrate.
  4. Click the Migrate Instance button. Migrateinstance.png: button to migrate an instance
  5. From the list of hosts, choose the one to which you want to move the VM.
  6. Click OK.
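Migration can also be driven through the API with migrateVirtualMachine. A hedged sketch with placeholder IDs, assuming the unauthenticated integration API port is enabled for testing; passing virtualmachineid to listHosts is expected to return hosts that are valid migration targets for that VM:
    curl "http://localhost:8096/client/api?command=listHosts&virtualmachineid=VM_ID"
    curl "http://localhost:8096/client/api?command=migrateVirtualMachine&virtualmachineid=VM_ID&hostid=DEST_HOST_ID"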

15.10. Deleting VMs

Users can delete their own virtual machines. A running virtual machine will be abruptly stopped before it is deleted. Administrators can delete any virtual machines.
To delete a virtual machine:
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation, click Instances.
  3. Choose the VM that you want to delete.
  4. Click the Destroy Instance button. Destroyinstance.png: button to destroy an instance

15.11. Working with ISOs

CloudStack supports ISOs and their attachment to guest VMs. An ISO is a read-only file that has an ISO/CD-ROM style file system. Users can upload their own ISOs and mount them on their guest VMs.
ISOs are uploaded based on a URL. HTTP is the supported protocol. Once the ISO is available via HTTP specify an upload URL such as http://my.web.server/filename.iso.
ISOs may be public or private, like templates. ISOs are not hypervisor-specific: a guest on vSphere can mount the exact same image that a guest on KVM can mount.
ISO images may be stored in the system and made available with a privacy level similar to templates. ISO images are classified as either bootable or not bootable. A bootable ISO image is one that contains an OS image. CloudStack allows a user to boot a guest VM off of an ISO image. Users can also attach ISO images to guest VMs; for example, this enables installing PV drivers into Windows.

15.11.1. Adding an ISO

To make additional operating system or other software available for use with guest VMs, you can add an ISO. The ISO is typically thought of as an operating system image, but you can also add ISOs for other types of software, such as desktop applications that you want to be installed as part of a template.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation bar, click Templates.
  3. In Select View, choose ISOs.
  4. Click Add ISO.
  5. In the Add ISO screen, provide the following:
    • Name: Short name for the ISO image. For example, CentOS 6.2 64-bit.
    • Description: Display text for the ISO image. For example, CentOS 6.2 64-bit.
    • URL: The URL that hosts the ISO image. The Management Server must be able to access this location via HTTP. If needed, you can place the ISO image directly on the Management Server.
    • Zone: Choose the zone where you want the ISO to be available, or All Zones to make it available throughout CloudStack.
    • Bootable: Whether or not a guest could boot off this ISO image. For example, a CentOS ISO is bootable, a Microsoft Office ISO is not bootable.
    • OS Type: This helps CloudStack and the hypervisor perform certain operations and make assumptions that improve the performance of the guest. Select one of the following.
      • If the operating system of your desired ISO image is listed, choose it.
      • If the OS Type of the ISO is not listed or if the ISO is not bootable, choose Other.
      • (XenServer only) If you want to boot from this ISO in PV mode, choose Other PV (32-bit) or Other PV (64-bit)
      • (KVM only) If you choose an OS that is PV-enabled, the VMs created from this ISO will have a SCSI (virtio) root disk. If the OS is not PV-enabled, the VMs will have an IDE root disk. The PV-enabled types are:
        Fedora 13
        Fedora 12
        Fedora 11
        Fedora 10
        Fedora 9
        Other PV
        Debian GNU/Linux
        CentOS 5.3
        CentOS 5.4
        CentOS 5.5
        Red Hat Enterprise Linux 5.3
        Red Hat Enterprise Linux 5.4
        Red Hat Enterprise Linux 5.5
        Red Hat Enterprise Linux 6

      Note

      It is not recommended to choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will usually not work. In these cases, choose Other.
    • Extractable: Choose Yes if the ISO should be available for extraction.
    • Public: Choose Yes if this ISO should be available to other users.
    • Featured: Choose Yes if you would like this ISO to be more prominent for users to select. The ISO will appear in the Featured ISOs list. Only an administrator can make an ISO Featured.
  6. Click OK.
    The Management Server will download the ISO. Depending on the size of the ISO, this may take a long time. The ISO status column will display Ready once it has been successfully downloaded into secondary storage. Clicking Refresh updates the download percentage.
  7. Important: Wait for the ISO to finish downloading. If you move on to the next task and try to use the ISO right away, it will appear to fail. The entire ISO must be available before CloudStack can work with it.
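ISO registration can also be scripted with the registerIso API call. A hedged sketch, assuming the unauthenticated integration API port is enabled for testing; the URL follows the example given earlier and the IDs in capitals are placeholders (listZones, listOsTypes):
    curl "http://localhost:8096/client/api?command=registerIso&name=CentOS-6.2-x86_64&displaytext=CentOS+6.2+64-bit&url=http://my.web.server/filename.iso&zoneid=ZONE_ID&bootable=true&ostypeid=OS_TYPE_ID"
The download runs asynchronously; listIsos can be polled until the ISO shows as Ready, just as the UI status column does.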

15.11.2. Attaching an ISO to a VM

  1. In the left navigation, click Instances.
  2. Choose the virtual machine you want to work with.
  3. Click the Attach ISO button. iso.png: depicts adding an iso image
  4. In the Attach ISO dialog box, select the desired ISO.
  5. Click OK.

Chapter 16. Working With Hosts

16.1. Adding Hosts

Additional hosts can be added at any time to provide more capacity for guest VMs. For requirements and instructions, see Section 5.6, “Adding a Host”.

16.2. Scheduled Maintenance and Maintenance Mode for Hosts

You can place a host into maintenance mode. When maintenance mode is activated, the host becomes unavailable to receive new guest VMs, and the guest VMs already running on the host are seamlessly migrated to another host not in maintenance mode. This migration uses live migration technology and does not interrupt the execution of the guest.

16.2.1. vCenter and Maintenance Mode

To enter maintenance mode on a vCenter host, both vCenter and CloudStack must be used in concert. CloudStack and vCenter have separate maintenance modes that work closely together.
  1. Place the host into CloudStack's "scheduled maintenance" mode. This does not invoke the vCenter maintenance mode, but only causes VMs to be migrated off the host
    When the CloudStack maintenance mode is requested, the host first moves into the Prepare for Maintenance state. In this state it cannot be the target of new guest VM starts. Then all VMs will be migrated off the server. Live migration will be used to move VMs off the host. This allows the guests to be migrated to other hosts with no disruption to the guests. After this migration is completed, the host will enter the Ready for Maintenance mode.
  2. Wait for the "Ready for Maintenance" indicator to appear in the UI.
  3. Now use vCenter to perform whatever actions are necessary to maintain the host. During this time, the host cannot be the target of new VM allocations.
  4. When the maintenance tasks are complete, take the host out of maintenance mode as follows:
    1. First use vCenter to exit the vCenter maintenance mode.
      This makes the host ready for CloudStack to reactivate it.
    2. Then use CloudStack's administrator UI to cancel the CloudStack maintenance mode
      When the host comes back online, the VMs that were migrated off of it may be migrated back to it manually and new VMs can be added.
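The CloudStack side of this sequence can also be driven through the API. A hedged sketch with a placeholder host ID, assuming the unauthenticated integration API port is enabled for testing:
    # step 1: ask CloudStack to migrate VMs off and enter maintenance
    curl "http://localhost:8096/client/api?command=prepareHostForMaintenance&id=HOST_ID"
    # after the vCenter-side maintenance is finished and vCenter maintenance mode has been exited
    curl "http://localhost:8096/client/api?command=cancelHostMaintenance&id=HOST_ID"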

16.2.2. XenServer and Maintenance Mode

For XenServer, you can take a server offline temporarily by using the Maintenance Mode feature in XenCenter. When you place a server into Maintenance Mode, all running VMs are automatically migrated from it to another host in the same pool. If the server is the pool master, a new master will also be selected for the pool. While a server is in Maintenance Mode, you cannot create or start any VMs on it.
To place a server in Maintenance Mode:
  1. In the Resources pane, select the server, then do one of the following:
    • Right-click, then click Enter Maintenance Mode on the shortcut menu.
    • On the Server menu, click Enter Maintenance Mode.
  2. Click Enter Maintenance Mode.
The server's status in the Resources pane shows when all running VMs have been successfully migrated off the server.
To take a server out of Maintenance Mode:
  1. In the Resources pane, select the server, then do one of the following:
    • Right-click, then click Exit Maintenance Mode on the shortcut menu.
    • On the Server menu, click Exit Maintenance Mode.
  2. Click Exit Maintenance Mode.

16.3. Disabling and Enabling Zones, Pods, and Clusters

You can enable or disable a zone, pod, or cluster without permanently removing it from the cloud. This is useful for maintenance or when there are problems that make a portion of the cloud infrastructure unreliable. No new allocations will be made to a disabled zone, pod, or cluster until its state is returned to Enabled. When a zone, pod, or cluster is first added to the cloud, it is Disabled by default.
To disable and enable a zone, pod, or cluster:
  1. Log in to the CloudStack UI as administrator
  2. In the left navigation bar, click Infrastructure.
  3. In Zones, click View More.
  4. If you are disabling or enabling a zone, find the name of the zone in the list, and click the Enable/Disable button. enable-disable.png: button to enable or disable zone, pod, or cluster.
  5. If you are disabling or enabling a pod or cluster, click the name of the zone that contains the pod or cluster.
  6. Click the Compute tab.
  7. In the Pods or Clusters node of the diagram, click View All.
  8. Click the pod or cluster name in the list.
  9. Click the Enable/Disable button.

16.4. Removing Hosts

Hosts can be removed from the cloud as needed. The procedure to remove a host depends on the hypervisor type.

16.4.1. Removing XenServer and KVM Hosts

A node cannot be removed from a cluster until it has been placed in maintenance mode. This will ensure that all of the VMs on it have been migrated to other Hosts. To remove a Host from the cloud:
  1. Place the node in maintenance mode.
  2. For KVM, stop the cloud-agent service.
  3. Use the UI option to remove the node.
    Then you may power down the Host, re-use its IP address, re-install it, etc.

16.4.2. Removing vSphere Hosts

To remove this type of host, first place it in maintenance mode, as described in Section 16.2, “Scheduled Maintenance and Maintenance Mode for Hosts”. Then use CloudStack to remove the host. CloudStack will not direct commands to a host that has been removed using CloudStack. However, the host may still exist in the vCenter cluster.

16.5. Re-Installing Hosts

You can re-install a host after placing it in maintenance mode and then removing it. If a host is down and cannot be placed in maintenance mode, it should still be removed before the re-install.

16.6. Maintaining Hypervisors on Hosts

When running hypervisor software on hosts, be sure all the hotfixes provided by the hypervisor vendor are applied. Track the release of hypervisor patches through your hypervisor vendor’s support channel, and apply patches as soon as possible after they are released. CloudStack will not track or notify you of required hypervisor patches. It is essential that your hosts are completely up to date with the provided hypervisor patches. The hypervisor vendor is likely to refuse to support any system that is not up to date with patches.

Note

The lack of up-to-date hotfixes can lead to data corruption and lost VMs.

16.7. Changing Host Password

The password for a XenServer Node, KVM Node, or vSphere Node may be changed in the database. Note that all Nodes in a Cluster must have the same password.
To change a Node's password:
  1. Identify all hosts in the cluster.
  2. Change the password on all hosts in the cluster. Now the password for the host and the password known to CloudStack will not match. Operations on the cluster will fail until the two passwords match.
  3. Get the list of host IDs for the host in the cluster where you are changing the password. You will need to access the database to determine these host IDs. For each hostname "h" (or vSphere cluster) that you are changing the password for, execute:
    mysql> select id from cloud.host where name like '%h%';
  4. This should return a single ID. Record the set of such IDs for these hosts.
  5. Update the passwords for the host in the database. In this example, we change the passwords for hosts with IDs 5, 10, and 12 to "password".
    mysql> update cloud.host set password='password' where id=5 or id=10 or id=12;

16.8. Host Allocation

The system automatically picks the most appropriate host to run each virtual machine. End users may specify the zone in which the virtual machine will be created. End users do not have control over which host will run the virtual machine instance.
CloudStack administrators can specify that certain hosts should have a preference for particular types of guest instances. For example, an administrator could state that a host should have a preference to run Windows guests. The default host allocator will attempt to place guests of that OS type on such hosts first. If no such host is available, the allocator will place the instance wherever there is sufficient physical capacity.
Both vertical and horizontal allocation are allowed. Vertical allocation consumes all the resources of a given host before allocating any guests on a second host. This reduces power consumption in the cloud. Horizontal allocation places a guest on each host in a round-robin fashion. This may yield better performance for the guests in some cases. CloudStack also allows an element of CPU over-provisioning as configured by the administrator. Over-provisioning allows the administrator to commit more CPU cycles to the allocated guests than are actually available from the hardware.
CloudStack also provides a pluggable interface for adding new allocators. These custom allocators can provide any policy the administrator desires.

16.8.1. Over-Provisioning and Service Offering Limits

CloudStack performs CPU over-provisioning based on an over-provisioning ratio configured by the administrator. This is defined by the cpu.overprovisioning.factor global configuration variable.
Service offering limits (e.g. 1 GHz, 1 core) are strictly enforced for core count. For example, a guest with a service offering of one core will have only one core available to it regardless of other activity on the Host.
Service offering limits for gigahertz are enforced only in the presence of contention for CPU resources. For example, suppose that a guest was created with a service offering of 1 GHz on a Host that has 2 GHz cores, and that guest is the only guest running on the Host. The guest will have the full 2 GHz available to it. When multiple guests are attempting to use the CPU a weighting factor is used to schedule CPU resources. The weight is based on the clock speed in the service offering. Guests receive a CPU allocation that is proportionate to the GHz in the service offering. For example, a guest created from a 2 GHz service offering will receive twice the CPU allocation as a guest created from a 1 GHz service offering. CloudStack does not perform memory over-provisioning.
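A rough worked example of the over-provisioning arithmetic (the numbers are illustrative): with cpu.overprovisioning.factor set to 2, a host with 16 cores at 2 GHz (32 GHz of physical CPU) is treated by the allocators as having 64 GHz of allocatable CPU, so guests whose offerings total up to 64 GHz can be placed there, subject to memory and core-count limits. Under contention, each guest still receives CPU time in proportion to the GHz in its own offering, as described above.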

16.9. VLAN Provisioning

CloudStack automatically creates and destroys interfaces bridged to VLANs on the hosts. In general the administrator does not need to manage this process.
CloudStack manages VLANs differently based on hypervisor type. For XenServer or KVM, the VLANs are created on only the hosts where they will be used and then they are destroyed when all guests that require them have been terminated or moved to another host.
For vSphere the VLANs are provisioned on all hosts in the cluster even if there is no guest running on a particular Host that requires the VLAN. This allows the administrator to perform live migration and other functions in vCenter without having to create the VLAN on the destination Host. Additionally, the VLANs are not removed from the Hosts when they are no longer needed.
You can use the same VLANs on different physical networks provided that each physical network has its own underlying layer-2 infrastructure, such as switches. For example, you can specify VLAN range 500 to 1000 while deploying physical networks A and B in an Advanced zone setup. This capability allows you to set up an additional layer-2 physical infrastructure on a different physical NIC and use the same set of VLANs if you run out of VLANs. Another advantage is that you can use the same set of IPs for different customers, each one with their own routers and the guest networks on different physical NICs.

Chapter 17. Working with Templates

A template is a reusable configuration for virtual machines. When users launch VMs, they can choose from a list of templates in CloudStack.
Specifically, a template is a virtual disk image that includes one of a variety of operating systems, optional additional software such as office applications, and settings such as access control to determine who can use the template. Each template is associated with a particular type of hypervisor, which is specified when the template is added to CloudStack.
CloudStack ships with a default template. In order to present more choices to users, CloudStack administrators and users can create templates and add them to CloudStack.

17.1. Creating Templates: Overview

CloudStack ships with a default template for the CentOS operating system. There are a variety of ways to add more templates. Administrators and end users can add templates. The typical sequence of events is:
  1. Launch a VM instance that has the operating system you want. Make any other desired configuration changes to the VM.
  2. Stop the VM.
  3. Convert the volume into a template.
There are other ways to add templates to CloudStack. For example, you can take a snapshot of the VM's volume and create a template from the snapshot, or import a VHD from another system into CloudStack.
The various techniques for creating templates are described in the next few sections.

17.2. Requirements for Templates

  • For XenServer, install PV drivers / Xen tools on each template that you create. This will enable live migration and clean guest shutdown.
  • For vSphere, install VMware Tools on each template that you create. This will enable console view to work properly.

17.3. Best Practices for Templates

If you plan to use large templates (100 GB or larger), be sure you have a 10-gigabit network to support the large templates. A slower network can lead to timeouts and other errors when large templates are used.

17.4. The Default Template

CloudStack includes a CentOS template. This template is downloaded by the Secondary Storage VM after the primary and secondary storage are configured. You can use this template in your production deployment or you can delete it and use custom templates.
The root password for the default template is "password".
A default template is provided for each of XenServer, KVM, and vSphere. The templates that are downloaded depend on the hypervisor type that is available in your cloud. Each template is approximately 2.5 GB physical size.
The default template includes the standard iptables rules, which block most access to the template except ssh.
# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere        anywhere       icmp any
ACCEPT     esp  --  anywhere        anywhere
ACCEPT     ah   --  anywhere        anywhere
ACCEPT     udp  --  anywhere        224.0.0.251    udp dpt:mdns
ACCEPT     udp  --  anywhere        anywhere       udp dpt:ipp
ACCEPT     tcp  --  anywhere        anywhere       tcp dpt:ipp
ACCEPT     all  --  anywhere        anywhere       state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere        anywhere       state NEW tcp dpt:ssh
REJECT     all  --  anywhere        anywhere       reject-with icmp-host-prohibited
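If you build custom templates from this default and need additional services to be reachable, you can open further ports inside the guest before converting it into a template. The following is a minimal sketch, assuming a CentOS guest and using HTTP (port 80) purely as an example; run it as root inside the guest:
# Insert an ACCEPT rule for new TCP connections on port 80 ahead of the REJECT rule
iptables -I RH-Firewall-1-INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
# Persist the change so it survives reboots of VMs created from the template
service iptables save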

17.5. Private and Public Templates

When a user creates a template, it can be designated private or public.
Private templates are only available to the user who created them. By default, an uploaded template is private.
When a user marks a template as “public,” the template becomes available to all users in all accounts in the user's domain, as well as users in any other domains that have access to the Zone where the template is stored. This depends on whether the Zone, in turn, was defined as private or public. A private Zone is assigned to a single domain, and a public Zone is accessible to any domain. If a public template is created in a private Zone, it is available only to users in the domain assigned to that Zone. If a public template is created in a public Zone, it is available to all users in all domains.

17.6. Creating a Template from an Existing Virtual Machine

Once you have at least one VM set up in the way you want, you can use it as the prototype for other VMs.
  1. Create and start a virtual machine using any of the techniques given in Section 15.4, “Creating VMs”.
  2. Make any desired configuration changes on the running VM, then click Stop.
  3. Wait for the VM to stop. When the status shows Stopped, go to the next step.
  4. Click Create Template and provide the following:
    • Name and Display Text. These will be shown in the UI, so choose something descriptive.
    • OS Type. This helps CloudStack and the hypervisor perform certain operations and make assumptions that improve the performance of the guest. Select one of the following.
      • If the operating system of the stopped VM is listed, choose it.
      • If the OS type of the stopped VM is not listed, choose Other.
      • If you want to boot from this template in PV mode, choose Other PV (32-bit) or Other PV (64-bit). This choice is available only for XenServer.

        Note

        Generally, you should not choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will in general not work. In those cases, choose Other.
    • Public. Choose Yes to make this template accessible to all users of this CloudStack installation. The template will appear in the Community Templates list. See Section 17.5, “Private and Public Templates”.
    • Password Enabled. Choose Yes if your template has the CloudStack password change script installed. See Section 17.13, “Adding Password Management to Your Templates”.
  5. Click Add.
The new template will be visible in the Templates section when the template creation process has been completed. The template is then available when creating a new VM.

17.7. Creating a Template from a Snapshot

If you do not want to stop the VM in order to use the Create Template menu item (as described in Section 17.6, “Creating a Template from an Existing Virtual Machine”), you can create a template directly from any snapshot through the CloudStack UI.

17.8. Uploading Templates

vSphere Templates and ISOs

If you are uploading a template that was created using vSphere Client, be sure the OVA file does not contain an ISO. If it does, the deployment of VMs from the template will fail.
Templates are uploaded based on a URL. HTTP is the supported access protocol. Templates are frequently large files. You can optionally gzip them to decrease upload times.
To upload a template:
  1. In the left navigation bar, click Templates.
  2. Click Register Template.
  3. Provide the following:
    • Name and Description. These will be shown in the UI, so choose something descriptive.
    • URL. The Management Server will download the file from the specified URL, such as http://my.web.server/filename.vhd.gz.
    • Zone. Choose the zone where you want the template to be available, or All Zones to make it available throughout CloudStack.
    • OS Type: This helps CloudStack and the hypervisor perform certain operations and make assumptions that improve the performance of the guest. Select one of the following:
      • If the operating system of the stopped VM is listed, choose it.
      • If the OS type of the stopped VM is not listed, choose Other.

        Note

        You should not choose an older version of the OS than the version in the image. For example, choosing CentOS 5.4 to support a CentOS 6.2 image will in general not work. In those cases you should choose Other.
    • Hypervisor: The supported hypervisors are listed. Select the desired one.
    • Format. The format of the template upload file, such as VHD or OVA.
    • Password Enabled. Choose Yes if your template has the CloudStack password change script installed. See Adding Password Management to Your Templates
    • Extractable. Choose Yes if the template is available for extraction. If this option is selected, end users can download a full image of a template.
    • Public. Choose Yes to make this template accessible to all users of this CloudStack installation. The template will appear in the Community Templates list. See Section 17.5, “Private and Public Templates”.
    • Featured. Choose Yes if you would like this template to be more prominent for users to select. The template will appear in the Featured Templates list. Only an administrator can make a template Featured.
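The same registration can be scripted through the registerTemplate API call. The following is a minimal sketch using the CloudMonkey command-line client, assuming it is installed and configured against your management server; the template name, URL, and the zone and OS type UUIDs are placeholders:
# Register a gzipped VHD template for XenServer (name, URL, and UUIDs are placeholders)
cloudmonkey register template name=CentOS-6.2-x64 \
    displaytext=CentOS-6.2-64bit format=VHD hypervisor=XenServer \
    url=http://my.web.server/centos62.vhd.gz \
    zoneid=<zone-uuid> ostypeid=<os-type-uuid> \
    ispublic=false passwordenabled=true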

17.9. Exporting Templates

End users and Administrators may export templates from CloudStack. Navigate to the template in the UI and choose the Download function from the Actions menu.

17.10. Creating a Windows Template

Windows templates must be prepared with Sysprep before they can be provisioned on multiple machines. Sysprep allows you to create a generic Windows template and avoid any possible SID conflicts.

Note

(XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown.
An overview of the procedure is as follows:
  1. Upload your Windows ISO.
    For more information, see Section 15.11.1, “Adding an ISO”.
  2. Create a VM Instance with this ISO.
    For more information, see Section 15.4, “Creating VMs”.
  3. Follow the steps in Sysprep for Windows Server 2008 R2 (below) or Sysprep for Windows Server 2003 R2, depending on your version of Windows Server.
  4. The preparation steps are complete. Now you can actually create the template as described in Creating the Windows Template.

17.10.1. System Preparation for Windows Server 2008 R2

For Windows 2008 R2, you run Windows System Image Manager to create a custom sysprep response XML file. Windows System Image Manager is installed as part of the Windows Automated Installation Kit (AIK). Windows AIK can be downloaded from Microsoft Download Center.
Use the following steps to run sysprep for Windows 2008 R2:

Note

The steps outlined here are derived from the excellent guide by Charity Shelbourne, originally published at Windows Server 2008 Sysprep Mini-Setup.
  1. Download and install the Windows AIK

    Note

    Windows AIK should not be installed on the Windows 2008 R2 VM you just created. Windows AIK should not be part of the template you create. It is only used to create the sysprep answer file.
  2. Copy the install.wim file in the \sources directory of the Windows 2008 R2 installation DVD to the hard disk. This is a very large file and may take a long time to copy. Windows AIK requires the WIM file to be writable.
  3. Start the Windows System Image Manager, which is part of the Windows AIK.
  4. In the Windows Image pane, right click the Select a Windows image or catalog file option to load the install.wim file you just copied.
  5. Select the Windows 2008 R2 Edition.
    You may be prompted with a warning that the catalog file cannot be opened. Click Yes to create a new catalog file.
  6. In the Answer File pane, right click to create a new answer file.
  7. Generate the answer file from the Windows System Image Manager using the following steps:
    1. The first page you need to automate is the Language and Country or Region Selection page. To automate this, expand Components in your Windows Image pane, right-click and add the Microsoft-Windows-International-Core setting to Pass 7 oobeSystem. In your Answer File pane, configure the InputLocale, SystemLocale, UILanguage, and UserLocale with the appropriate settings for your language and country or region. Should you have a question about any of these settings, you can right-click on the specific setting and select Help. This will open the appropriate CHM help file with more information, including examples on the setting you are attempting to configure.
      sysmanager.png: System Image Manager
    2. You need to automate the Software License Terms Selection page, otherwise known as the End-User License Agreement (EULA). To do this, expand the Microsoft-Windows-Shell-Setup component, highlight the OOBE setting, and add the setting to the Pass 7 oobeSystem. In Settings, set HideEULAPage to true.
      software-license.png: Depicts hiding the EULA page.
    3. Make sure the license key is properly set. If you use a MAK key, you can just enter the MAK key on the Windows 2008 R2 VM; you need not input the MAK into the Windows System Image Manager. If you use a KMS host for activation, you need not enter the Product Key. Details of Windows Volume Activation can be found at http://technet.microsoft.com/en-us/library/bb892849.aspx
    4. The next page you need to automate is the Change Administrator Password page. Expand the Microsoft-Windows-Shell-Setup component (if it is not still expanded), expand UserAccounts, right-click on AdministratorPassword, and add the setting to the Pass 7 oobeSystem configuration pass of your answer file. Under Settings, specify a password next to Value.
      change-admin-password.png: Depicts changing the administrator password
      You may read the AIK documentation and set many more options that suit your deployment. The steps above are the minimum needed to make Windows unattended setup work.
  8. Save the answer file as unattend.xml. You can ignore the warning messages that appear in the validation window.
  9. Copy the unattend.xml file into the c:\windows\system32\sysprep directory of the Windows 2008 R2 Virtual Machine
  10. Once you place the unattend.xml file in c:\windows\system32\sysprep directory, you run the sysprep tool as follows:
    cd c:\Windows\System32\sysprep
    sysprep.exe /oobe /generalize /shutdown
    
    The Windows 2008 R2 VM will automatically shut down after sysprep is complete.

17.10.2. System Preparation for Windows Server 2003 R2

Earlier versions of Windows have a different sysprep tool. Follow these steps for Windows Server 2003 R2.
  1. Extract the content of \support\tools\deploy.cab on the Windows installation CD into a directory called c:\sysprep on the Windows 2003 R2 VM.
  2. Run c:\sysprep\setupmgr.exe to create the sysprep.inf file.
    1. Select Create New to create a new Answer File.
    2. Enter “Sysprep setup” for the Type of Setup.
    3. Select the appropriate OS version and edition.
    4. On the License Agreement screen, select “Yes fully automate the installation”.
    5. Provide your name and organization.
    6. Leave display settings at default.
    7. Set the appropriate time zone.
    8. Provide your product key.
    9. Select an appropriate license mode for your deployment
    10. Select “Automatically generate computer name”.
    11. Type a default administrator password. If you enable the password reset feature, the users will not actually use this password. This password will be reset by the instance manager after the guest boots up.
    12. Leave Network Components at “Typical Settings”.
    13. Select the “WORKGROUP” option.
    14. Leave Telephony options at default.
    15. Select appropriate Regional Settings.
    16. Select appropriate language settings.
    17. Do not install printers.
    18. Do not specify “Run Once commands”.
    19. You need not specify an identification string.
    20. Save the Answer File as c:\sysprep\sysprep.inf.
  3. Run the following command to sysprep the image:
    c:\sysprep\sysprep.exe -reseal -mini -activated
    After this step the machine will automatically shut down.

17.11. Importing Amazon Machine Images

The following procedures describe how to import an Amazon Machine Image (AMI) into CloudStack when using the XenServer hypervisor.
Assume you have an AMI file and this file is called CentOS_6.2_x64. Assume further that you are working on a CentOS host. If the AMI is a Fedora image, you need to be working on a Fedora host initially.
You need to have a XenServer host with a file-based storage repository (either a local ext3 SR or an NFS SR) to convert the image to a VHD once it has been customized on the CentOS/Fedora host.

Note

When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
To import an AMI:
  1. Set up loopback on image file:
    # mkdir -p /mnt/loop/centos62
    # mount -o loop  CentOS_6.2_x64 /mnt/loop/centos62
    
  2. Install the kernel-xen package into the image. This downloads the PV kernel and ramdisk to the image.
    # yum -c /mnt/loop/centos62/etc/yum.conf --installroot=/mnt/loop/centos62/ -y install kernel-xen
  3. Create a grub entry in /boot/grub/grub.conf.
    # mkdir -p /mnt/loop/centos62/boot/grub
    # touch /mnt/loop/centos62/boot/grub/grub.conf
    # echo "" > /mnt/loop/centos62/boot/grub/grub.conf
    
  4. Determine the name of the PV kernel that has been installed into the image.
    # cd /mnt/loop/centos62
    # ls lib/modules/
    2.6.16.33-xenU  2.6.16-xenU  2.6.18-164.15.1.el5xen  2.6.18-164.6.1.el5.centos.plus  2.6.18-xenU-ec2-v1.0  2.6.21.7-2.fc8xen  2.6.31-302-ec2
    # ls boot/initrd*
    boot/initrd-2.6.18-164.6.1.el5.centos.plus.img boot/initrd-2.6.18-164.15.1.el5xen.img
    # ls boot/vmlinuz*
    boot/vmlinuz-2.6.18-164.15.1.el5xen  boot/vmlinuz-2.6.18-164.6.1.el5.centos.plus  boot/vmlinuz-2.6.18-xenU-ec2-v1.0  boot/vmlinuz-2.6.21-2952.fc8xen
    
    Xen kernels and ramdisks always end with "xen". For the kernel version you choose, there must be an entry for that version under lib/modules, and there must be corresponding initrd and vmlinuz files under boot/. Above, the only kernel that satisfies this condition is 2.6.18-164.15.1.el5xen.
  5. Based on your findings, create an entry in the grub.conf file. Below is an example entry.
    default=0
    timeout=5
    hiddenmenu
    title CentOS (2.6.18-164.15.1.el5xen)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.18-164.15.1.el5xen ro root=/dev/xvda 
            initrd /boot/initrd-2.6.18-164.15.1.el5xen.img
    
  6. Edit etc/fstab, changing “sda1” to “xvda” and changing “sdb” to “xvdb”.
    # cat etc/fstab
    /dev/xvda  /         ext3    defaults        1 1
    /dev/xvdb  /mnt      ext3    defaults        0 0
    none       /dev/pts  devpts  gid=5,mode=620  0 0
    none       /proc     proc    defaults        0 0
    none       /sys      sysfs   defaults        0 0
    
  7. Enable login via the console. The default console device in a XenServer system is xvc0. Ensure that etc/inittab and etc/securetty have the following lines respectively:
    # grep xvc0 etc/inittab 
    co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
    # grep xvc0 etc/securetty 
    xvc0
    
  8. Ensure the ramdisk supports PV disk and PV network. Customize this for the kernel version you have determined above.
    # chroot /mnt/loop/centos62
    # cd /boot/
    # mv initrd-2.6.18-164.15.1.el5xen.img initrd-2.6.18-164.15.1.el5xen.img.bak
    # mkinitrd -f /boot/initrd-2.6.18-164.15.1.el5xen.img --with=xennet --preload=xenblk --omit-scsi-modules 2.6.18-164.15.1.el5xen
    
  9. Change the password.
    # passwd
    Changing password for user root.
    New UNIX password: 
    Retype new UNIX password: 
    passwd: all authentication tokens updated successfully.
    
  10. Exit out of chroot.
    # exit
  11. Check etc/ssh/sshd_config for lines allowing ssh login using a password.
    # egrep "PermitRootLogin|PasswordAuthentication" /mnt/loop/centos62/etc/ssh/sshd_config
    PermitRootLogin yes
    PasswordAuthentication yes
    
  12. If you need the template to be enabled to reset passwords from the CloudStack UI or API, install the password change script into the image at this point. See Section 17.13, “Adding Password Management to Your Templates”.
  13. Unmount and delete loopback mount.
    # umount /mnt/loop/centos62
    # losetup -d /dev/loop0
    
  14. Copy the image file to your XenServer host's file-based storage repository. In the example below, the XenServer host is "xenhost". This XenServer has an NFS repository whose uuid is a9c5b8c8-536b-a193-a6dc-51af3e5ff799.
    # scp CentOS_6.2_x64 xenhost:/var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799/
  15. Log in to the XenServer host and create a VDI of the same size as the image.
    [root@xenhost ~]# cd /var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]#  ls -lh CentOS_6.2_x64
    -rw-r--r-- 1 root root 10G Mar 16 16:49 CentOS_6.2_x64
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-create virtual-size=10GiB sr-uuid=a9c5b8c8-536b-a193-a6dc-51af3e5ff799 type=user name-label="Centos 6.2 x86_64"
    cad7317c-258b-4ef7-b207-cdf0283a7923
    
  16. Import the image file into the VDI. This may take 10–20 minutes.
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-import filename=CentOS_6.2_x64 uuid=cad7317c-258b-4ef7-b207-cdf0283a7923
  17. Locate the VHD file. This is the file with the VDI’s UUID as its name. Compress it and upload it to your web server.
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# bzip2 -c cad7317c-258b-4ef7-b207-cdf0283a7923.vhd > CentOS_6.2_x64.vhd.bz2
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# scp CentOS_6.2_x64.vhd.bz2 webserver:/var/www/html/templates/
    

17.12. Converting a Hyper-V VM to a Template

To convert a Hyper-V VM to a XenServer-compatible CloudStack template, you will need a standalone XenServer host with an attached NFS VHD SR. Use whatever XenServer version you are using with CloudStack, but use XenCenter 5.6 FP1 or SP2 (it is backwards compatible to 5.6). Additionally, it may help to have an attached NFS ISO SR.
For Linux VMs, you may need to do some preparation in Hyper-V before trying to get the VM to work in XenServer. Clone the VM and work on the clone if you still want to use the VM in Hyper-V. Uninstall Hyper-V Integration Components and check for any references to device names in /etc/fstab:
  1. From the linux_ic/drivers/dist directory, run make uninstall (where "linux_ic" is the path to the copied Hyper-V Integration Components files).
  2. Restore the original initrd from backup in /boot/ (the backup is named *.backup0).
  3. Remove the "hdX=noprobe" entries from /boot/grub/menu.lst.
  4. Check /etc/fstab for any partitions mounted by device name. Change those entries (if any) to mount by LABEL or UUID. You can get that information with the blkid command.
The next step is to make sure the VM is not running in Hyper-V, then get the VHD into XenServer. There are two options for doing this.
Option one:
  1. Import the VHD using XenCenter. In XenCenter, go to Tools>Virtual Appliance Tools>Disk Image Import.
  2. Choose the VHD, then click Next.
  3. Name the VM, choose the NFS VHD SR under Storage, enable "Run Operating System Fixups" and choose the NFS ISO SR.
  4. Click Next, then Finish. A VM should be created.
Option two:
  1. Run XenConvert, under From choose VHD, under To choose XenServer. Click Next.
  2. Choose the VHD, then click Next.
  3. Input the XenServer host info, then click Next.
  4. Name the VM, then click Next, then Convert. A VM should be created.
Once you have a VM created from the Hyper-V VHD, prepare it using the following steps:
  1. Boot the VM, uninstall Hyper-V Integration Services, and reboot.
  2. Install XenServer Tools, then reboot.
  3. Prepare the VM as desired. For example, run sysprep on Windows VMs. See Section 17.10, “Creating a Windows Template”.
Either option above will create a VM in HVM mode. This is fine for Windows VMs, but Linux VMs may not perform optimally. Converting a Linux VM to PV mode will require additional steps and will vary by distribution.
  1. Shut down the VM and copy the VHD from the NFS storage to a web server; for example, mount the NFS share on the web server and copy it, or from the XenServer host use sftp or scp to upload it to the web server.
  2. In CloudStack, create a new template using the following values:
    • URL. Give the URL for the VHD
    • OS Type. Use the appropriate OS. For PV mode on CentOS, choose Other PV (32-bit) or Other PV (64-bit). This choice is available only for XenServer.
    • Hypervisor. XenServer
    • Format. VHD
The template will be created, and you can create instances from it.

17.13. Adding Password Management to Your Templates

CloudStack provides an optional password reset feature that allows users to set a temporary admin or root password as well as reset the existing admin or root password from the CloudStack UI.
To enable the Reset Password feature, you will need to download an additional script to patch your template. When you later upload the template into CloudStack, you can specify whether the reset admin/root password feature should be enabled for this template.
The password management feature always resets the account password on instance boot. The script makes an HTTP call to the virtual router to retrieve the account password that should be set. As long as the virtual router is accessible, the guest will have access to the account password that should be used. When the user requests a password reset, the management server generates and sends a new password to the virtual router for the account. Thus an instance reboot is necessary to effect any password changes.
If the script is unable to contact the virtual router during instance boot it will not set the password but boot will continue normally.

17.13.1. Linux OS Installation

Use the following steps to begin the Linux OS installation:
  1. Download the script file cloud-set-guest-password.
  2. Copy this file to /etc/init.d.
    On some Linux distributions, copy the file to /etc/rc.d/init.d.
  3. Run the following command to make the script executable:
    chmod +x /etc/init.d/cloud-set-guest-password
  4. Depending on the Linux distribution, continue with the appropriate step.
    On Fedora, CentOS/RHEL, and Debian, run:
    chkconfig --add cloud-set-guest-password

17.13.2. Windows OS Installation

Download the installer, CloudInstanceManager.msi, from the Download page and run the installer in the newly created Windows VM.

17.14. Deleting Templates

Templates may be deleted. In general, when a template spans multiple Zones, only the copy that is selected for deletion will be deleted; the same template in other Zones will not be deleted. The provided CentOS template is an exception to this. If the provided CentOS template is deleted, it will be deleted from all Zones.
When templates are deleted, the VMs instantiated from them will continue to run. However, new VMs cannot be created based on the deleted template.

Chapter 18. Working With Storage

18.1. Storage Overview

CloudStack defines two types of storage: primary and secondary. Primary storage can be accessed by either iSCSI or NFS. Additionally, direct attached storage may be used for primary storage. Secondary storage is always accessed using NFS.
There is no ephemeral storage in CloudStack. All volumes on all nodes are persistent.

18.2. Primary Storage

This section gives concepts and technical details about CloudStack primary storage. For information about how to install and configure primary storage through the CloudStack UI, see the Installation Guide.

18.2.1. Best Practices for Primary Storage

  • The speed of primary storage will impact guest performance. If possible, choose smaller, higher RPM drives for primary storage.
  • Ensure that nothing is stored on the server. Adding the server to CloudStack will destroy any existing data.

18.2.2. Runtime Behavior of Primary Storage

Root volumes are created automatically when a virtual machine is created. Root volumes are deleted when the VM is destroyed. Data volumes can be created and dynamically attached to VMs. Data volumes are not deleted when VMs are destroyed.
Administrators should monitor the capacity of primary storage devices and add additional primary storage as needed. See the Advanced Installation Guide.
Administrators add primary storage to the system by creating a CloudStack storage pool. Each storage pool is associated with a cluster.

18.2.3. Hypervisor Support for Primary Storage

The following table shows storage options and parameters for different hypervisors.
                                             VMware vSphere    Citrix XenServer        KVM
Format for Disks, Templates, and Snapshots   VMDK              VHD                     QCOW2
iSCSI support                                VMFS              Clustered LVM           Yes, via Shared Mountpoint
Fiber Channel support                        VMFS              Yes, via Existing SR    Yes, via Shared Mountpoint
NFS support                                  Yes               Yes                     Yes
Local storage support                        Yes               Yes                     Yes
Storage over-provisioning                    NFS and iSCSI     NFS                     NFS
XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result, CloudStack can still support storage over-provisioning by running on thin-provisioned storage volumes.
KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case, CloudStack does not attempt to mount or unmount the storage as it does with NFS. CloudStack requires the administrator to ensure that the storage is available.
With NFS storage, CloudStack manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type.
Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration.
CloudStack supports multiple primary storage pools in a Cluster. For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity.

18.2.4. Storage Tags

Storage may be "tagged". A tag is a text string attribute associated with primary storage, a Disk Offering, or a Service Offering. Tags allow administrators to provide additional information about the storage, for example, that it is "SSD" or that it is "slow". Tags are not interpreted by CloudStack. They are matched against tags placed on service and disk offerings. CloudStack requires all tags on service and disk offerings to exist on the primary storage before it allocates root or data disks on the primary storage. Service and disk offering tags are used to identify the requirements of the storage that those offerings have. For example, a high-end service offering may require "fast" for its root disk volume.
The interaction between tags, allocation, and volume copying across clusters and pods can be complex. To simplify the situation, use the same set of tags on the primary storage for all clusters in a pod. Even if different devices are used to present those tags, the set of exposed tags can be the same.
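For example, an administrator could create a disk offering that will only be satisfied by primary storage carrying a matching tag. The following is a minimal sketch using the CloudMonkey command-line client; the offering name and the "SSD" tag are illustrative and assume at least one storage pool has been tagged "SSD":
# Create a 20 GB disk offering that requires primary storage tagged "SSD"
cloudmonkey create diskoffering name=ssd-20GB \
    displaytext=20GB-on-SSD-storage disksize=20 tags=SSD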

18.2.5. Maintenance Mode for Primary Storage

Primary storage may be placed into maintenance mode. This is useful, for example, to replace faulty RAM in a storage device. Maintenance mode for a storage device will first stop any new guests from being provisioned on the storage device. Then it will stop all guests that have any volume on that storage device. When all such guests are stopped the storage device is in maintenance mode and may be shut down. When the storage device is online again you may cancel maintenance mode for the device. The CloudStack will bring the device back online and attempt to start all guests that were running at the time of the entry into maintenance mode.

18.3. Secondary Storage

This section gives concepts and technical details about CloudStack secondary storage. For information about how to install and configure secondary storage through the CloudStack UI, see the Advanced Installation Guide.

18.4. Working With Volumes

A volume provides storage to a guest VM. The volume can provide for a root disk or an additional data disk. CloudStack supports additional volumes for guest VMs.
Volumes are created for a specific hypervisor type. A volume that has been attached to a guest using one hypervisor type (for example, XenServer) may not be attached to a guest using another hypervisor type, such as vSphere or KVM. This is because the different hypervisors use different disk image formats.
CloudStack defines a volume as a unit of storage available to a guest VM. Volumes are either root disks or data disks. The root disk has "/" in the file system and is usually the boot device. Data disks provide for additional storage, for example: "/opt" or "D:". Every guest VM has a root disk, and VMs can also optionally have a data disk. End users can mount multiple data disks to guest VMs. Users choose data disks from the disk offerings created by administrators. The user can create a template from a volume as well; this is the standard procedure for private template creation. Volumes are hypervisor-specific: a volume from one hypervisor type may not be used on a guest of another hypervisor type.

Note

CloudStack supports attaching up to 13 data disks to a VM on XenServer hypervisor versions 6.0 and above. For the VMs on other hypervisor types, the data disk limit is 6.

18.4.1. Creating a New Volume

You can add more data disk volumes to a guest VM at any time, up to the limits of your storage capacity. Both CloudStack administrators and users can add volumes to VM instances. When you create a new volume, it is stored as an entity in CloudStack, but the actual storage resources are not allocated on the physical storage device until you attach the volume. This optimization allows the CloudStack to provision the volume nearest to the guest that will use it when the first attachment is made.

18.4.1.1. Using Local Storage for Data Volumes

You can create data volumes on local storage (supported with XenServer, KVM, and VMware). The data volume is placed on the same host as the VM instance that is attached to the data volume. These local data volumes can be attached to virtual machines, detached, re-attached, and deleted just as with the other types of data volume.
Local storage is ideal for scenarios where persistence of data volumes and HA is not required. Some of the benefits include reduced disk I/O latency and cost reduction from using inexpensive local disks.
In order for local volumes to be used, the feature must be enabled for the zone.
You can create a data disk offering for local storage. When a user creates a new VM, they can select this disk offering in order to cause the data disk volume to be placed in local storage.
You cannot migrate a VM that has a volume in local storage to a different host, nor migrate the volume itself away to a different host. If you want to put a host into maintenance mode, you must first stop any VMs with local data volumes on that host.

18.4.1.2. To Create a New Volume

  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation bar, click Storage.
  3. In Select View, choose Volumes.
  4. To create a new volume, click Add Volume, provide the following details, and click OK.
    • Name. Give the volume a unique name so you can find it later.
    • Availability Zone. Where do you want the storage to reside? This should be close to the VM that will use the volume.
    • Disk Offering. Choose the characteristics of the storage.
    The new volume appears in the list of volumes with the state “Allocated.” The volume data is stored in CloudStack, but the volume is not yet ready for use.
  5. To start using the volume, continue to Section 18.4.3, “Attaching a Volume”.

18.4.2. Uploading an Existing Volume to a Virtual Machine

Existing data can be made accessible to a virtual machine. This is called uploading a volume to the VM. For example, this is useful to upload data from a local file system and attach it to a VM. Root administrators, domain administrators, and end users can all upload existing volumes to VMs.
The upload is performed using HTTP. The uploaded volume is placed in the zone's secondary storage.
You cannot upload a volume if the preconfigured volume limit has already been reached. The default limit for the cloud is set in the global configuration parameter max.account.volumes, but administrators can also set per-domain limits that are different from the global default. See Setting Usage Limits.
To upload a volume:
  1. (Optional) Create an MD5 hash (checksum) of the disk image file that you are going to upload; an example command is shown after these steps. After uploading the data disk, CloudStack will use this value to verify that no data corruption has occurred.
  2. Log in to the CloudStack UI as an administrator or user
  3. In the left navigation bar, click Storage.
  4. Click Upload Volume.
  5. Provide the following:
    • Name and Description. Any desired name and a brief description that can be shown in the UI.
    • Availability Zone. Choose the zone where you want to store the volume. VMs running on hosts in this zone can attach the volume.
    • Format. Choose one of the following to indicate the disk image format of the volume.
      Hypervisor    Disk Image Format
      XenServer     VHD
      VMware        OVA
      KVM           QCOW2
    • URL. The secure HTTP or HTTPS URL that CloudStack can use to access your disk. The type of file at the URL must match the value chosen in Format. For example, if Format is VHD, the URL might look like the following:
      http://yourFileServerIP/userdata/myDataDisk.vhd
    • MD5 checksum. (Optional) Use the hash that you created in step 1.
  6. Wait until the status of the volume shows that the upload is complete. Click Instances - Volumes, find the name you specified in step 5, and make sure the status is Uploaded.
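For step 1, the checksum can be generated with the standard md5sum utility on a Linux machine; the file name below is the placeholder used in the URL example above:
# Compute the MD5 checksum of the disk image before uploading it
md5sum myDataDisk.vhd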

18.4.3. Attaching a Volume

You can attach a volume to a guest VM to provide extra disk storage. Attach a volume when you first create a new volume, when you are moving an existing volume from one VM to another, or after you have migrated a volume from one storage pool to another.
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation, click Storage.
  3. In Select View, choose Volumes.
  4. Click the volume name in the Volumes list, then click the Attach Disk button. AttachDiskButton.png: button to attach a volume
  5. In the Instance popup, choose the VM to which you want to attach the volume. You will only see instances to which you are allowed to attach volumes; for example, a user will see only instances created by that user, but the administrator will have more choices.
  6. When the volume has been attached, you should be able to see it by clicking Instances, the instance name, and View Volumes.

18.4.4. Detaching and Moving Volumes

Note

This procedure is different from moving disk volumes from one storage pool to another. See Section 18.4.5, “VM Storage Migration”.
A volume can be detached from a guest VM and attached to another guest. Both CloudStack administrators and users can detach volumes from VMs and move them to other VMs.
If the two VMs are in different clusters, and the volume is large, it may take several minutes for the volume to be moved to the new VM.
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation bar, click Storage, and choose Volumes in Select View. Alternatively, if you know which VM the volume is attached to, you can click Instances, click the VM name, and click View Volumes.
  3. Click the name of the volume you want to detach, then click the Detach Disk button. DetachDiskButton.png: button to detach a volume
  4. To move the volume to another VM, follow the steps in Section 18.4.3, “Attaching a Volume”.

18.4.5. VM Storage Migration

Supported in XenServer, KVM, and VMware.

Note

This procedure is different from moving disk volumes from one VM to another. See Section 18.4.4, “Detaching and Moving Volumes”.
You can migrate a virtual machine’s root disk volume or any additional data disk volume from one storage pool to another in the same zone.
You can use the storage migration feature to achieve some commonly desired administration goals, such as balancing the load on storage pools and increasing the reliability of virtual machines by moving them away from any storage pool that is experiencing issues.

18.4.5.1. Migrating a Data Disk Volume to a New Storage Pool

  1. Log in to the CloudStack UI as a user or admin.
  2. Detach the data disk from the VM. See Section 18.4.4, “Detaching and Moving Volumes” (but skip the “reattach” step at the end; you will do that after migrating to the new storage).
  3. Call the CloudStack API command migrateVolume, passing in the volume ID and the ID of any storage pool in the zone; a sketch of this call follows these steps.
  4. Watch for the volume status to change to Migrating, then back to Ready.
  5. Attach the volume to any desired VM running in the same cluster as the new storage server. See Section 18.4.3, “Attaching a Volume”.
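The following is a minimal sketch of step 3 using the CloudMonkey command-line client, assuming it is configured against your management server; the UUIDs are placeholders that you would obtain from the listVolumes and listStoragePools calls:
# Find the detached volume and the candidate storage pools in the zone
cloudmonkey list volumes name=mydatadisk
cloudmonkey list storagepools zoneid=<zone-uuid>
# Start the migration; watch the volume status change to Migrating, then Ready
cloudmonkey migrate volume volumeid=<volume-uuid> storageid=<target-pool-uuid>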

18.4.5.2. Migrating a VM Root Volume to a New Storage Pool

When migrating the root disk volume, the VM must first be stopped, and users can not access the VM. After migration is complete, the VM can be restarted.
  1. Log in to the CloudStack UI as a user or admin.
  2. Detach the data disk from the VM. See Section 18.4.4, “Detaching and Moving Volumes” (but skip the “reattach” step at the end; you will do that after migrating to the new storage).
  3. Stop the VM.
  4. Use the CloudStack API command migrateVirtualMachine with the ID of the VM to migrate and the IDs of a destination host and destination storage pool in the same zone; a sketch of this call follows these steps.
  5. Watch for the VM status to change to Migrating, then back to Stopped.
  6. Restart the VM.
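The following is a hedged sketch of step 4 using the CloudMonkey command-line client, assuming the migrateVirtualMachine call accepts the destination host and storage pool parameters described above; verify the parameter names against your API reference. The UUIDs are placeholders:
# Migrate the stopped VM's root volume to another storage pool in the same zone
cloudmonkey migrate virtualmachine virtualmachineid=<vm-uuid> \
    hostid=<destination-host-uuid> storageid=<destination-pool-uuid>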

18.4.6. Resizing Volumes

CloudStack provides the ability to resize data disks; CloudStack controls volume size by using disk offerings. This provides CloudStack administrators with the flexibility to choose how much space they want to make available to the end users. Volumes within disk offerings with the same storage tag can be resized. For example, if you only want to offer 10, 50, and 100 GB offerings, the allowed resize should stay within those limits. That implies that if you define 10 GB, 50 GB, and 100 GB disk offerings, a user can upgrade from 10 GB to 50 GB, or from 50 GB to 100 GB. If you create a custom-sized disk offering, then you have the option to resize the volume by specifying a new, larger size.
Additionally, using the resizeVolume API, a data volume can be moved from a static disk offering to a custom disk offering with the size specified. This functionality allows those who might be billing by certain volume sizes or disk offerings to stick to that model, while providing the flexibility to migrate to whatever custom size necessary.
This feature is supported on KVM, XenServer, and VMware hosts. However, shrinking volumes is not supported on VMware hosts.
Before you try to resize a volume, consider the following:
  • The VMs associated with the volume are stopped.
  • The data disks associated with the volume are removed.
  • When a volume is shrunk, the disk associated with it is simply truncated, and doing so would put its content at risk of data loss. Therefore, resize any partitions or file systems before you shrink a data disk so that all the data is moved off the space being removed.
To resize a volume:
  1. Log in to the CloudStack UI as a user or admin.
  2. In the left navigation bar, click Storage.
  3. In Select View, choose Volumes.
  4. Select the volume name in the Volumes list, then click the Resize Volume button resize-volume-icon.png: button to display the resize volume option.
  5. In the Resize Volume pop-up, choose desired characteristics for the storage.
    resize-volume.png: option to resize a volume.
    1. If you select Custom Disk, specify a custom size.
    2. Click Shrink OK to confirm that you are reducing the size of a volume.
      This parameter protects against inadvertent shrinking of a disk, which might lead to the risk of data loss. You must sign off that you know what you are doing.
  6. Click OK.
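The resizeVolume API mentioned above can perform the same operation outside the UI. The following is a minimal sketch using the CloudMonkey command-line client; the UUIDs are placeholders, and the example assumes a custom disk offering and a new size of 100 GB:
# Move a data volume to a custom disk offering and grow it to 100 GB
cloudmonkey resize volume id=<volume-uuid> \
    diskofferingid=<custom-offering-uuid> size=100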

18.4.7. Volume Deletion and Garbage Collection

The deletion of a volume does not delete the snapshots that have been created from the volume.
When a VM is destroyed, data disk volumes that are attached to the VM are not deleted.
Volumes are permanently destroyed using a garbage collection process. The global configuration variables expunge.delay and expunge.interval determine when the physical deletion of volumes will occur.
  • expunge.delay: determines how old the volume must be before it is destroyed, in seconds
  • expunge.interval: determines how often to run the garbage collection check
Administrators should adjust these values depending on site policies around data retention.
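These values can be changed on the Global Settings screen or through the updateConfiguration API. The following is a minimal sketch using the CloudMonkey command-line client; the values are illustrative, and as with any global configuration change the Management Server must be restarted afterwards:
# Keep destroyed volumes for one day (in seconds) before expunging them
cloudmonkey update configuration name=expunge.delay value=86400
# Run the garbage collection check once per hour (in seconds)
cloudmonkey update configuration name=expunge.interval value=3600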

18.5. Working with Snapshots

(Supported for the following hypervisors: XenServer, VMware vSphere, and KVM)
CloudStack supports snapshots of disk volumes. Snapshots are a point-in-time capture of virtual machine disks. Memory and CPU states are not captured.
Snapshots may be taken for volumes, including both root and data disks. The administrator places a limit on the number of stored snapshots per user. Users can create new volumes from the snapshot for recovery of particular files and they can create templates from snapshots to boot from a restored disk.
Users can create snapshots manually or by setting up automatic recurring snapshot policies. Users can also create disk volumes from snapshots, which may be attached to a VM like any other disk volume. Snapshots of both root disks and data disks are supported. However, CloudStack does not currently support booting a VM from a recovered root disk. A disk recovered from snapshot of a root disk is treated as a regular data disk; the data on recovered disk can be accessed by attaching the disk to a VM.
A completed snapshot is copied from primary storage to secondary storage, where it is stored until deleted or purged by a newer snapshot.

18.5.1. Snapshot Job Throttling

When a snapshot of a virtual machine is requested, the snapshot job runs on the same host where the VM is running or, in the case of a stopped VM, the host where it ran last. If many snapshots are requested for VMs on a single host, this can lead to problems with too many snapshot jobs overwhelming the resources of the host.
To address this situation, the cloud's root administrator can throttle how many snapshot jobs are executed simultaneously on the hosts in the cloud by using the global configuration setting concurrent.snapshots.threshold.perhost. By using this setting, the administrator can better ensure that snapshot jobs do not time out and hypervisor hosts do not experience performance issues due to hosts being overloaded with too many snapshot requests.
Set concurrent.snapshots.threshold.perhost to a value that represents a best guess about how many snapshot jobs the hypervisor hosts can execute at one time, given the current resources of the hosts and the number of VMs running on the hosts. If a given host has more snapshot requests, the additional requests are placed in a waiting queue. No new snapshot jobs will start until the number of currently executing snapshot jobs falls below the configured limit.
The admin can also set job.expire.minutes to place a maximum on how long a snapshot request will wait in the queue. If this limit is reached, the snapshot request fails and returns an error message.

18.5.2. Automatic Snapshot Creation and Retention

(Supported for the following hypervisors: XenServer, VMware vSphere, and KVM)
Users can set up a recurring snapshot policy to automatically create multiple snapshots of a disk at regular intervals. Snapshots can be created on an hourly, daily, weekly, or monthly interval. One snapshot policy can be set up per disk volume. For example, a user can set up a daily snapshot at 02:30.
With each snapshot schedule, users can also specify the number of scheduled snapshots to be retained. Older snapshots that exceed the retention limit are automatically deleted. This user-defined limit must be equal to or lower than the global limit set by the CloudStack administrator. See Section 19.3, “Globally Configured Limits”. The limit applies only to those snapshots that are taken as part of an automatic recurring snapshot policy. Additional manual snapshots can be created and retained.
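Recurring policies can also be created through the createSnapshotPolicy API. The following is a hedged sketch using the CloudMonkey command-line client for the daily-at-02:30 example above; it assumes the MM:HH schedule form used for daily policies and a retention of 8 snapshots, and the volume UUID is a placeholder:
# Take a snapshot of the volume every day at 02:30 GMT, keeping the 8 most recent
cloudmonkey create snapshotpolicy volumeid=<volume-uuid> \
    intervaltype=DAILY schedule=30:02 timezone=GMT maxsnaps=8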

18.5.3. Incremental Snapshots and Backup

Snapshots are created on primary storage where a disk resides. After a snapshot is created, it is immediately backed up to secondary storage and removed from primary storage for optimal utilization of space on primary storage.
CloudStack supports incremental backups for some hypervisors. When incremental backups are supported, every Nth backup is a full backup.
                              VMware vSphere    Citrix XenServer    KVM
Supports incremental backup   N                 Y                   N

18.5.4. Volume Status

When a snapshot operation is triggered by means of a recurring snapshot policy, a snapshot is skipped if a volume has remained inactive since its last snapshot was taken. A volume is considered to be inactive if it is either detached or attached to a VM that is not running. CloudStack ensures that at least one snapshot is taken since the volume last became inactive.
When a snapshot is taken manually, a snapshot is always created regardless of whether a volume has been active or not.

18.5.5. Snapshot Restore

There are two paths to restoring snapshots. Users can create a volume from the snapshot. The volume can then be mounted to a VM and files recovered as needed. Alternatively, a template may be created from the snapshot of a root disk. The user can then boot a VM from this template to effect recovery of the root disk.

Chapter 19. Working with Usage

The Usage Server is an optional, separately-installed part of CloudStack that provides aggregated usage records which you can use to create billing integration for CloudStack. The Usage Server works by taking data from the events log and creating summary usage records that you can access using the listUsageRecords API call.
The usage records show the amount of resources, such as VM run time or template storage space, consumed by guest instances.
The Usage Server runs at least once per day. It can be configured to run multiple times per day.
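For example, a billing integration could pull one day of records with the listUsageRecords API. The following is a minimal sketch using the CloudMonkey command-line client; the dates, account name, and domain UUID are placeholders:
# Fetch usage records for a single aggregation day for one account
cloudmonkey list usagerecords startdate=2013-04-01 enddate=2013-04-01 \
    account=exampleaccount domainid=<domain-uuid>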

19.1. Configuring the Usage Server

To configure the usage server:
  1. Be sure the Usage Server has been installed. This requires extra steps beyond just installing the CloudStack software. See Installing the Usage Server (Optional) in the Advanced Installation Guide.
  2. Log in to the CloudStack UI as administrator.
  3. Click Global Settings.
  4. In Search, type usage. Find the configuration parameter that controls the behavior you want to set. See the table below for a description of the available parameters.
  5. In Actions, click the Edit icon.
  6. Type the desired value and click the Save icon.
  7. Restart the Management Server (as usual with any global configuration change) and also the Usage Server:
    # service cloud-management restart
    # service cloud-usage restart
    
The following global configuration settings control the behavior of the Usage Server:
  • enable.usage.server: Whether the Usage Server is active.
  • usage.aggregation.timezone: Time zone of usage records. Set this if the usage records and daily job execution are in different time zones. For example, with the following settings, the usage job will run at PST 00:15 and generate usage records for the 24 hours from 00:00:00 GMT to 23:59:59 GMT:
    usage.stats.job.exec.time = 00:15
    usage.execution.timezone = PST
    usage.aggregation.timezone = GMT
    Valid values for the time zone are specified in Appendix A, Time Zones. Default: GMT.
  • usage.execution.timezone: The time zone of usage.stats.job.exec.time. Valid values for the time zone are specified in Appendix A, Time Zones. Default: the time zone of the management server.
  • usage.sanity.check.interval: The number of days between sanity checks. Set this in order to periodically search for records with erroneous data before issuing customer invoices. For example, this checks for VM usage records created after the VM was destroyed, and similar checks for templates, volumes, and so on. It also checks for usage times longer than the aggregation range. If any issue is found, the alert ALERT_TYPE_USAGE_SANITY_RESULT = 21 is sent.
  • usage.stats.job.aggregation.range: The time period in minutes between Usage Server processing jobs. For example, if you set it to 1440, the Usage Server will run once per day. If you set it to 600, it will run every ten hours. In general, when a Usage Server job runs, it processes all events generated since usage was last run.
    There is special handling for the case of 1440 (once per day). In this case the Usage Server does not necessarily process all records since Usage was last run. CloudStack assumes that you require processing once per day for the previous, complete day’s records. For example, if the current day is October 7, then it is assumed you would like to process records for October 6, from midnight to midnight. CloudStack assumes this “midnight to midnight” is relative to the usage.execution.timezone. Default: 1440.
  • usage.stats.job.exec.time: The time when the Usage Server processing will start. It is specified in 24-hour format (HH:MM) in the time zone of the server, which should be GMT. For example, to start the Usage job at 10:30 GMT, enter “10:30”.
    If usage.stats.job.aggregation.range is also set, and its value is not 1440, then its value will be added to usage.stats.job.exec.time to get the time to run the Usage Server job again. This is repeated until 24 hours have elapsed, and the next day's processing begins again at usage.stats.job.exec.time. Default: 00:15.
For example, suppose that your server is in GMT, your user population is predominantly in the East Coast of the United States, and you would like to process usage records every night at 2 AM local (EST) time. Choose these settings:
  • enable.usage.server = true
  • usage.execution.timezone = America/New_York
  • usage.stats.job.exec.time = 07:00. This will run the Usage job at 2:00 AM EST. Note that this will shift by an hour as the East Coast of the U.S. enters and exits Daylight Savings Time.
  • usage.stats.job.aggregation.range = 1440
With this configuration, the Usage job will run every night at 2 AM EST and will process records for the previous day’s midnight-midnight as defined by the EST (America/New_York) time zone.

Note

Because the special value 1440 has been used for usage.stats.job.aggregation.range, the Usage Server will ignore the data between midnight and 2 AM. That data will be included in the next day's run.

19.2. Setting Usage Limits

CloudStack provides several administrator control points for capping resource usage by users. Some of these limits are global configuration parameters. Others are applied at the ROOT domain and may be overridden on a per-account basis.
Aggregate limits may be set on a per-domain basis. For example, you may limit a domain and all subdomains to the creation of 100 VMs.
This section covers the following topics:

19.3. Globally Configured Limits

In a zone, the guest virtual network has a 24 bit CIDR by default. This limits the guest virtual network to 254 running instances. It can be adjusted as needed, but this must be done before any instances are created in the zone. For example, 10.1.1.0/22 would provide for ~1000 addresses.
The following limits are set in the Global Configuration:
  • max.account.public.ips: Number of public IP addresses that can be owned by an account.
  • max.account.snapshots: Number of snapshots that can exist for an account.
  • max.account.templates: Number of templates that can exist for an account.
  • max.account.user.vms: Number of virtual machine instances that can exist for an account.
  • max.account.volumes: Number of disk volumes that can exist for an account.
  • max.template.iso.size: Maximum size, in GB, for a downloaded template or ISO.
  • max.volume.size.gb: Maximum size, in GB, for a volume.
  • network.throttling.rate: Default data transfer rate, in megabits per second, allowed per user (supported on XenServer).
  • snapshot.max.hourly: Maximum recurring hourly snapshots to be retained for a volume. If the limit is reached, early snapshots from the start of the hour are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring hourly snapshots cannot be scheduled.
  • snapshot.max.daily: Maximum recurring daily snapshots to be retained for a volume. If the limit is reached, snapshots from the start of the day are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring daily snapshots cannot be scheduled.
  • snapshot.max.weekly: Maximum recurring weekly snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the week are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring weekly snapshots cannot be scheduled.
  • snapshot.max.monthly: Maximum recurring monthly snapshots to be retained for a volume. If the limit is reached, snapshots from the beginning of the month are deleted so that newer ones can be saved. This limit does not apply to manual snapshots. If set to 0, recurring monthly snapshots cannot be scheduled.
To modify global configuration parameters, use the global configuration screen in the CloudStack UI. See Setting Global Configuration Parameters

19.4. Default Account Resource Limits

You can limit resource use by accounts. The default limits are set by using global configuration parameters, and they affect all accounts within a cloud. The relevant parameters are those beginning with max.account, for example: max.account.snapshots.
To override a default limit for a particular account, set a per-account resource limit.
  1. Log in to the CloudStack UI.
  2. In the left navigation tree, click Accounts.
  3. Select the account you want to modify. The current limits are displayed. A value of -1 shows that there is no limit in place.
  4. Click the Edit button. editbutton.png: edits the settings

19.5. Per-Domain Limits

CloudStack allows the configuration of limits on a domain basis. With a domain limit in place, all users still have their account limits. They are additionally limited, as a group, to not exceed the resource limits set on their domain. Domain limits aggregate the usage of all accounts in the domain as well as all accounts in all subdomains of that domain. Limits set at the root domain level apply to the sum of resource usage by the accounts in all domains and sub-domains below that root domain.
To set a domain limit:
  1. Log in to the CloudStack UI.
  2. In the left navigation tree, click Domains.
  3. Select the domain you want to modify. The current domain limits are displayed. A value of -1 shows that there is no limit in place.
  4. Click the Edit button.

Chapter 20. Managing Networks and Traffic

20.1. Guest Traffic
20.2. Networking in a Pod
20.3. Networking in a Zone
20.4. Basic Zone Physical Network Configuration
20.5. Advanced Zone Physical Network Configuration
20.5.1. Configure Guest Traffic in an Advanced Zone
20.5.2. Configure Public Traffic in an Advanced Zone
20.6. Using Multiple Guest Networks
20.6.1. Adding an Additional Guest Network
20.6.2. Changing the Network Offering on a Guest Network
20.7. Security Groups
20.7.1. About Security Groups
20.7.2. Adding a Security Group
20.7.3. Security Groups in Advanced Zones (KVM Only)
20.7.4. Enabling Security Groups
20.7.5. Adding Ingress and Egress Rules to a Security Group
20.8. External Firewalls and Load Balancers
20.8.1. About Using a NetScaler Load Balancer
20.8.2. Configuring SNMP Community String on a RHEL Server
20.8.3. Initial Setup of External Firewalls and Load Balancers
20.8.4. Ongoing Configuration of External Firewalls and Load Balancers
20.8.5. Configuring AutoScale
20.9. Load Balancer Rules
20.9.1. Adding a Load Balancer Rule
20.9.2. Sticky Session Policies for Load Balancer Rules
20.10. Guest IP Ranges
20.11. Acquiring a New IP Address
20.12. Releasing an IP Address
20.13. Static NAT
20.13.1. Enabling or Disabling Static NAT
20.14. IP Forwarding and Firewalling
20.14.1. Creating Egress Firewall Rules in an Advanced Zone
20.14.2. Firewall Rules
20.14.3. Port Forwarding
20.15. IP Load Balancing
20.16. DNS and DHCP
20.17. VPN
20.17.1. Configuring VPN
20.17.2. Using VPN with Windows
20.17.3. Using VPN with Mac OS X
20.17.4. Setting Up a Site-to-Site VPN Connection
20.18. About Inter-VLAN Routing
20.19. Configuring a Virtual Private Cloud
20.19.1. About Virtual Private Clouds
20.19.2. Adding a Virtual Private Cloud
20.19.3. Adding Tiers
20.19.4. Configuring Access Control List
20.19.5. Adding a Private Gateway to a VPC
20.19.6. Deploying VMs to the Tier
20.19.7. Acquiring a New IP Address for a VPC
20.19.8. Releasing an IP Address Alloted to a VPC
20.19.9. Enabling or Disabling Static NAT on a VPC
20.19.10. Adding Load Balancing Rules on a VPC
20.19.11. Adding a Port Forwarding Rule on a VPC
20.19.12. Removing Tiers
20.19.13. Editing, Restarting, and Removing a Virtual Private Cloud
20.20. Persistent Networks
20.20.1. Persistent Network Considerations
20.20.2. Creating a Persistent Guest Network
In CloudStack, guest VMs can communicate with each other using shared infrastructure, with the security and user perception that the guests have a private LAN. The CloudStack virtual router is the main component providing networking features for guest traffic.

20.1. Guest Traffic

A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address.
This figure illustrates a typical guest traffic setup:
Depicts a guest traffic setup.
The Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router has three network interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address of 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic.
The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses.
Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs.

20.2. Networking in a Pod

The figure below illustrates network setup within a single pod. The hosts are connected to a pod-level switch. At a minimum, the hosts should have one physical uplink to each switch. Bonded NICs are supported as well. The pod-level switch is a pair of redundant gigabit switches with 10 G uplinks.
networksinglepod.png: diagram showing logical view of network in a pod
Servers are connected as follows:
  • Storage devices are connected to only the network that carries management traffic.
  • Hosts are connected to networks for both management traffic and public traffic.
  • Hosts are also connected to one or more networks carrying guest traffic.
We recommend the use of multiple physical Ethernet cards to implement each network interface as well as redundant switch fabric in order to maximize throughput and improve reliability.

20.3. Networking in a Zone

The following figure illustrates the network setup within a single zone.
networksetupzone.png: Depicts network setup in a single zone
A firewall for management traffic operates in the NAT mode. The network typically is assigned IP addresses in the 192.168.0.0/16 Class B private address space. Each pod is assigned IP addresses in the 192.168.*.0/24 Class C private address space.
Each zone has its own set of public IP addresses. Public IP addresses from different zones do not overlap.

20.4. Basic Zone Physical Network Configuration

In a basic network, configuring the physical network is fairly straightforward. You only need to configure one guest network to carry traffic that is generated by guest VMs. When you first add a zone to CloudStack, you set up the guest network through the Add Zone screens.

20.5. Advanced Zone Physical Network Configuration

Within a zone that uses advanced networking, you need to tell the Management Server how the physical network is set up to carry different kinds of traffic in isolation.

20.5.1. Configure Guest Traffic in an Advanced Zone

These steps assume you have already logged in to the CloudStack UI. To configure the base guest network:
  1. In the left navigation, choose Infrastructure. On Zones, click View More, then click the zone to which you want to add a network.
  2. Click the Network tab.
  3. Click Add guest network.
    The Add guest network window is displayed.
  4. Provide the following information:
    • Name: The name of the network. This will be user-visible.
    • Display Text: The description of the network. This will be user-visible.
    • Zone: The zone in which you are configuring the guest network.
    • Network offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.
    • Guest Gateway: The gateway that the guests should use.
    • Guest Netmask: The netmask in use on the subnet the guests will use.
  5. Click OK.

20.5.2. Configure Public Traffic in an Advanced Zone

In a zone that uses advanced networking, you need to configure at least one range of IP addresses for Internet traffic.

20.6. Using Multiple Guest Networks

In zones that use advanced networking, additional networks for guest traffic may be added at any time after the initial installation. You can also customize the domain name associated with the network by specifying a DNS suffix for each network.
A VM's networks are defined at VM creation time. A VM cannot add or remove networks after it has been created, although the user can go into the guest and remove the IP address from the NIC on a particular network.
Each VM has just one default network. The virtual router's DHCP reply will set the guest's default gateway as that for the default network. Multiple non-default networks may be added to a guest in addition to the single, required default network. The administrator can control which networks are available as the default network.
Additional networks can either be available to all accounts or be assigned to a specific account. Networks that are available to all accounts are zone-wide. Any user with access to the zone can create a VM with access to that network. These zone-wide networks provide little or no isolation between guests. Networks that are assigned to a specific account provide strong isolation.

20.6.1. Adding an Additional Guest Network

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click Add guest network. Provide the following information:
    • Name: The name of the network. This will be user-visible.
    • Display Text: The description of the network. This will be user-visible.
    • Zone: The name of the zone this network applies to. Each zone is a broadcast domain, and therefore each zone has a different IP range for the guest network. The administrator must configure the IP range for each zone.
    • Network offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.
    • Guest Gateway: The gateway that the guests should use.
    • Guest Netmask: The netmask in use on the subnet the guests will use.
  4. Click Create.
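The same operation is available through the createNetwork API command. The following is only a sketch; the IDs and addresses are placeholders, the request must be authenticated as usual, and whether the gateway and netmask are required depends on the network offering:
    http://<management-server>:8080/client/api?command=createNetwork&name=web-tier&displaytext=Web%20tier%20network&zoneid=<zone-id>&networkofferingid=<offering-id>&gateway=10.1.2.1&netmask=255.255.255.0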

20.6.2. Changing the Network Offering on a Guest Network

A user or administrator can change the network offering that is associated with an existing guest network.
  • Log in to the CloudStack UI as an administrator or end user.
  • If you are changing from a network offering that uses the CloudStack virtual router to one that uses external devices as network service providers, you must first stop all the VMs on the network. See "Stopping and Starting Virtual Machines" in the Administrator's Guide.
  • In the left navigation, choose Network.
  • Click the name of the network you want to modify.
  • In the Details tab, click Edit.
  • In Network Offering, choose the new network offering, then click Apply.
  • A prompt is displayed asking whether you want to keep the existing CIDR. This is to let you know that if you change the network offering, the CIDR will be affected. Choose No to proceed with the change.
  • Wait for the update to complete. Don’t try to restart VMs until the network change is complete.
  • If you stopped any VMs, restart them.

20.7. Security Groups

20.7.1. About Security Groups

Security groups provide a way to isolate traffic to VMs. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM. Security groups are particularly useful in zones that use basic networking, because there is a single guest network for all guest VMs. In advanced zones, security groups are supported only on the KVM hypervisor.

Note

In a zone that uses advanced networking, you can instead define multiple guest networks to isolate traffic to VMs.
Each CloudStack account comes with a default security group that denies all inbound traffic and allows all outbound traffic. The default security group can be modified so that all new VMs inherit some other desired set of rules.
Any CloudStack user can set up any number of additional security groups. When a new VM is launched, it is assigned to the default security group unless another user-defined security group is specified. A VM can be a member of any number of security groups. Once a VM is assigned to a security group, it remains in that group for its entire lifetime; you can not move a running VM from one security group to another.
You can modify a security group by deleting or adding any number of ingress and egress rules. When you do, the new rules apply to all VMs in the group, whether running or stopped.
If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.

20.7.2. Adding a Security Group

A user or administrator can define a new security group.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network
  3. In Select view, choose Security Groups.
  4. Click Add Security Group.
  5. Provide a name and description.
  6. Click OK.
    The new security group appears in the Security Groups Details tab.
  7. To make the security group useful, continue to Adding Ingress and Egress Rules to a Security Group.

20.7.3. Security Groups in Advanced Zones (KVM Only)

CloudStack provides the ability to use security groups to provide isolation between guests on a single shared, zone-wide network in an advanced zone where KVM is the hypervisor. Using security groups in advanced zones rather than multiple VLANs allows a greater range of options for setting up guest isolation in a cloud.
Limitations
The following are not supported for this feature:
  • Two IP ranges with the same VLAN and different gateway or netmask in security group-enabled shared network.
  • Two IP ranges with the same VLAN and different gateway or netmask in account-specific shared networks.
  • Multiple VLAN ranges in security group-enabled shared network.
  • Multiple VLAN ranges in account-specific shared networks.
Security groups must be enabled in the zone in order for this feature to be used.

20.7.4. Enabling Security Groups

In order for security groups to function in a zone, the security groups feature must first be enabled for the zone. The administrator can do this when creating a new zone, by selecting a network offering that includes security groups. The procedure is described in Basic Zone Configuration in the Advanced Installation Guide. Security groups can be enabled only when a new zone is created; the administrator cannot enable them for an existing zone.

20.7.5. Adding Ingress and Egress Rules to a Security Group

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network
  3. In Select view, choose Security Groups, then click the security group you want.
  4. To add an ingress rule, click the Ingress Rules tab and fill out the following fields to specify what network traffic is allowed into VM instances in this security group. If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.
    • Add by CIDR/Account. Indicate whether the source of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow incoming traffic from all VMs in another security group
    • Protocol. The networking protocol that sources will use to send traffic to the security group. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data.
    • Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the incoming traffic. If you are opening a single port, use the same number in both fields.
    • ICMP Type, ICMP Code. (ICMP only) The type of message and error code that will be accepted.
    • CIDR. (Add by CIDR only) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the incoming traffic. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Account, Security Group. (Add by Account only) To accept only traffic from another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter its name.
    The following example allows inbound HTTP access from anywhere:
    httpaccess.png: allows inbound HTTP access from anywhere
  5. To add an egress rule, click the Egress Rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this security group. If no egress rules are specified, then all traffic will be allowed out. Once egress rules are specified, the following types of traffic are allowed out: traffic specified in egress rules; queries to DNS and DHCP servers; and responses to any traffic that has been allowed in through an ingress rule
    • Add by CIDR/Account. Indicate whether the destination of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow outgoing traffic to all VMs in another security group.
    • Protocol. The networking protocol that VMs will use to send outgoing traffic. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data.
    • Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields.
    • ICMP Type, ICMP Code. (ICMP only) The type of message and error code that will be sent
    • CIDR. (Add by CIDR only) To send traffic only to IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Account, Security Group. (Add by Account only) To allow traffic to be sent to another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter its name.
  6. Click Add.
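The same rules can be managed programmatically with the authorizeSecurityGroupIngress and authorizeSecurityGroupEgress API commands. As an illustrative sketch matching the inbound HTTP example above (the group name is a placeholder, and the request must be authenticated as usual):
    http://<management-server>:8080/client/api?command=authorizeSecurityGroupIngress&securitygroupname=webservers&protocol=TCP&startport=80&endport=80&cidrlist=0.0.0.0/0
The corresponding revoke commands (revokeSecurityGroupIngress and revokeSecurityGroupEgress) remove existing rules by ID.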

20.8. External Firewalls and Load Balancers

CloudStack is capable of replacing its Virtual Router with an external Juniper SRX device and an optional external NetScaler or F5 load balancer for gateway and load balancing services. In this case, the VMs use the SRX as their gateway.

20.8.1. About Using a NetScaler Load Balancer

Citrix NetScaler is supported as an external network element for load balancing in zones that use advanced networking (also called advanced zones). Set up an external load balancer when you want to provide load balancing through means other than CloudStack’s provided virtual router.
The NetScaler can be set up in direct (outside the firewall) mode. It must be added before any load balancing rules are deployed on guest VMs in the zone.
The functional behavior of the NetScaler with CloudStack is the same as described in the CloudStack documentation for using an F5 external load balancer. The only exception is that the F5 supports routing domains, and NetScaler does not. NetScaler can not yet be used as a firewall.
The Citrix NetScaler comes in three varieties. The following summarizes how these variants are treated in CloudStack.
  • MPX: Physical appliance. Capable of deep packet inspection; can act as application firewall and load balancer. CloudStack support: in advanced zones, load balancer functionality is fully supported without limitation. In basic zones, static NAT, elastic IP (EIP), and elastic load balancing (ELB) are also provided.
  • VPX: Virtual appliance. Can run as a VM on XenServer, ESXi, and Hyper-V hypervisors; same functionality as MPX. CloudStack support: supported only on ESXi, with the same functional support as for MPX. CloudStack will treat VPX and MPX as the same device type.
  • SDX: Physical appliance. Can create multiple fully isolated VPX instances on a single appliance to support multi-tenant usage. CloudStack support: CloudStack will dynamically provision, configure, and manage the lifecycle of VPX instances on the SDX. Provisioned instances are added into CloudStack automatically; no manual configuration by the administrator is required. Once a VPX instance is added into CloudStack, it is treated the same as a VPX on an ESXi host.

20.8.2. Configuring SNMP Community String on a RHEL Server

The SNMP community string is similar to a user id or password that provides access to a network device, such as a router. This string is sent along with all SNMP requests. If the community string is correct, the device responds with the requested information. If the community string is incorrect, the device discards the request and does not respond.
The NetScaler device uses SNMP to communicate with the VMs. You must install SNMP and configure the SNMP community string for secure communication between the NetScaler device and the RHEL machine.
  1. Ensure that SNMP is installed on the RHEL machine. If not, run the following command:
    yum install net-snmp-utils
  2. Edit the /etc/snmp/snmpd.conf file to allow the SNMP polling from the NetScaler device.
    1. Map the community name into a security name (local and mynetwork, depending on where the request is coming from):

      Note

      Use a strong password instead of public when you edit the following table.
      #         sec.name   source        community
      com2sec    local      localhost     public
      com2sec   mynetwork   0.0.0.0       public

      Note

      Setting to 0.0.0.0 allows all IPs to poll the NetScaler server.
    2. Map the security names into group names:
      #      group.name   sec.model  sec.name
      group   MyRWGroup     v1         local
      group   MyRWGroup     v2c        local
      group   MyROGroup     v1        mynetwork
      group   MyROGroup     v2c       mynetwork
    3. Create a view that the groups will be granted access to:
      #       name   incl/excl   subtree
      view    all    included    .1
    4. Grant the two groups access to the view you created, with different write permissions:
      #        group       context  sec.model  sec.level  prefix  read  write  notif
      access   MyROGroup   ""       any        noauth     exact   all   none   none
      access   MyRWGroup   ""       any        noauth     exact   all   all    all
  3. Unblock SNMP in iptables.
    iptables -A INPUT -p udp --dport 161 -j ACCEPT
  4. Start the SNMP service:
    service snmpd start
  5. Ensure that the SNMP service is started automatically during the system startup:
    chkconfig snmpd on
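Once the agent is running, it is worth verifying from another machine that the community string and firewall rule work before relying on them (for example, for the AutoScale counters described later). A minimal check, assuming net-snmp-utils is installed on the querying machine and substituting your community string for public:
    snmpwalk -v 2c -c public <rhel-server-ip> .1.3.6.1.4.1.2021.11
The OID shown is the UCD-SNMP systemStats subtree, which carries the Linux CPU counters; a successful walk returns a list of numeric values rather than a timeout.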

20.8.3. Initial Setup of External Firewalls and Load Balancers

When the first VM is created for a new account, CloudStack programs the external firewall and load balancer to work with the VM. The following objects are created on the firewall:
  • A new logical interface to connect to the account's private VLAN. The interface IP is always the first IP of the account's private subnet (e.g. 10.1.1.1).
  • A source NAT rule that forwards all outgoing traffic from the account's private VLAN to the public Internet, using the account's public IP address as the source address
  • A firewall filter counter that measures the number of bytes of outgoing traffic for the account
The following objects are created on the load balancer:
  • A new VLAN that matches the account's provisioned Zone VLAN
  • A self IP for the VLAN. This is always the second IP of the account's private subnet (e.g. 10.1.1.2).

20.8.4. Ongoing Configuration of External Firewalls and Load Balancers

Additional user actions (e.g. setting a port forward) will cause further programming of the firewall and load balancer. A user may request additional public IP addresses and forward traffic received at these IPs to specific VMs. This is accomplished by enabling static NAT for a public IP address, assigning the IP to a VM, and specifying a set of protocols and port ranges to open. When a static NAT rule is created, CloudStack programs the zone's external firewall with the following objects:
  • A static NAT rule that maps the public IP address to the private IP address of a VM.
  • A security policy that allows traffic within the set of protocols and port ranges that are specified.
  • A firewall filter counter that measures the number of bytes of incoming traffic to the public IP.
The number of incoming and outgoing bytes through source NAT, static NAT, and load balancing rules is measured and saved on each external element. This data is collected on a regular basis and stored in the CloudStack database.

20.8.5. Configuring AutoScale

AutoScaling allows you to scale your back-end services or application VMs up or down seamlessly and automatically according to the conditions you define. With AutoScaling enabled, you can ensure that the number of VMs you are using seamlessly scales up when demand increases, and automatically decreases when demand subsides. Using AutoScaling, you can automatically shut down instances you don't need, or launch new instances, depending on demand.
NetScaler AutoScaling is designed to seamlessly launch or terminate VMs based on user-defined conditions. Conditions for triggering a scaleup or scaledown action can vary from a simple use case like monitoring the CPU usage of a server to a complex use case of monitoring a combination of a server's responsiveness and its CPU usage. For example, you can configure AutoScaling to launch an additional VM whenever CPU usage exceeds 80 percent for 15 minutes, or to remove a VM whenever CPU usage is less than 20 percent for 30 minutes.
CloudStack uses the NetScaler load balancer to monitor all aspects of a system's health and work in unison with CloudStack to initiate scale-up or scale-down actions.

Note

AutoScale is supported on NetScaler Release 10 Build 73.e and beyond.
Prerequisites
Before you configure an AutoScale rule, consider the following:
  • Ensure that the necessary template is prepared before configuring AutoScale. When a VM is deployed by using the template, the application should be up and running as soon as the VM comes up.

    Note

    If the application is not running, the NetScaler device considers the VM as ineffective and continues provisioning the VMs unconditionally until the resource limit is exhausted.
  • Deploy the templates you prepared. Ensure that the applications come up on the first boot and are ready to take traffic. Observe the time required to deploy the template. Consider this time when you specify the quiet time while configuring AutoScale.
  • The AutoScale feature supports the SNMP counters that can be used to define conditions for taking scale up or scale down actions. To monitor the SNMP-based counter, ensure that the SNMP agent is installed in the template used for creating the AutoScale VMs, and the SNMP operations work with the configured SNMP community and port by using standard SNMP managers. For example, see Section 20.8.2, “Configuring SNMP Community String on a RHEL Server” to configure SNMP on a RHEL machine.
  • Ensure that the endpointe.url parameter present in the Global Settings is set to the Management Server API URL. For example, http://10.102.102.22:8080/client/api. In a multi-node Management Server deployment, use the virtual IP address configured in the load balancer for the management server’s cluster. Additionally, ensure that the NetScaler device has access to this IP address to provide AutoScale support.
    If you update the endpointe.url, disable the AutoScale functionality of the load balancer rules in the system, then enable them back to reflect the changes. For more information see Updating an AutoScale Configuration
  • If the API Key and Secret Key are regenerated for an AutoScale user, ensure that the AutoScale functionality of the load balancers that the user participates in are disabled and then enabled to reflect the configuration changes in the NetScaler.
  • In an advanced Zone, ensure that at least one VM is present before configuring a load balancer rule with AutoScale. Having one VM in the network ensures that the network is in the implemented state for configuring AutoScale.
Configuration
Specify the following:
autoscaleateconfig.png: Configuring AutoScale
  • Template: A template consists of a base OS image and application. A template is used to provision the new instance of an application on a scaleup action. When a VM is deployed from a template, the VM can start taking the traffic from the load balancer without any admin intervention. For example, if the VM is deployed for a Web service, it should have the Web server running, the database connected, and so on.
  • Compute offering: A predefined set of virtual hardware attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when creating a new virtual machine instance. Choose one of the compute offerings to be used while provisioning a VM instance as part of scaleup action.
  • Min Instance: The minimum number of active VM instances that is assigned to a load balancing rule. The active VM instances are the application instances that are up and serving the traffic, and are being load balanced. This parameter ensures that at least the configured number of active VM instances are available to serve the traffic.

    Note

    If an application, such as SAP, running on a VM instance is down for some reason, the VM is then not counted as part of Min Instance parameter, and the AutoScale feature initiates a scaleup action if the number of active VM instances is below the configured value. Similarly, when an application instance comes up from its earlier down state, this application instance is counted as part of the active instance count and the AutoScale process initiates a scaledown action when the active instance count breaches the Max instance value.
  • Max Instance: Maximum number of active VM instances that should be assigned to a load balancing rule. This parameter defines the upper limit of active VM instances that can be assigned to a load balancing rule.
    Specifying a large value for the maximum instance parameter might result in provisioning a large number of VM instances, which in turn leads to a single load balancing rule exhausting the VM instances limit specified at the account or domain level.

    Note

    If an application, such as SAP, running on a VM instance is down for some reason, the VM is not counted as part of Max Instance parameter. So there may be scenarios where the number of VMs provisioned for a scaleup action might be more than the configured Max Instance value. Once the application instances in the VMs are up from an earlier down state, the AutoScale feature starts aligning to the configured Max Instance value.
Specify the following scale-up and scale-down policies:
  • Duration: The duration, in seconds, for which the conditions you specify must be true to trigger a scaleup action. The conditions defined should hold true for the entire duration you specify for an AutoScale action to be invoked.
  • Counter: The performance counters expose the state of the monitored instances. By default, CloudStack offers four performance counters: Three SNMP counters and one NetScaler counter. The SNMP counters are Linux User CPU, Linux System CPU, and Linux CPU Idle. The NetScaler counter is ResponseTime. The root administrator can add additional counters into CloudStack by using the CloudStack API.
  • Operator: The following five relational operators are supported in AutoScale feature: Greater than, Less than, Less than or equal to, Greater than or equal to, and Equal to.
  • Threshold: Threshold value to be used for the counter. Once the counter defined above breaches the threshold value, the AutoScale feature initiates a scaleup or scaledown action.
  • Add: Click Add to add the condition.
Additionally, if you want to configure the advanced settings, click Show advanced settings, and specify the following:
  • Polling interval: The frequency at which the conditions (the combination of counter, operator, and threshold) are evaluated before taking a scale up or scale down action. The default polling interval is 30 seconds.
  • Quiet Time: This is the cool down period after an AutoScale action is initiated. The time includes the time taken to complete provisioning a VM instance from its template and the time taken by an application to be ready to serve traffic. This quiet time allows the fleet to come up to a stable state before any action can take place. The default is 300 seconds.
  • Destroy VM Grace Period: The duration in seconds, after a scaledown action is initiated, to wait before the VM is destroyed as part of scaledown action. This is to ensure graceful close of any pending sessions or transactions being served by the VM marked for destroy. The default is 120 seconds.
  • Security Groups: Security groups provide a way to isolate traffic to the VM instances. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM.
  • Disk Offerings: A predefined set of disk size for primary data storage.
  • SNMP Community: The SNMP community string to be used by the NetScaler device to query the configured counter value from the provisioned VM instances. Default is public.
  • SNMP Port: The port number on which the SNMP agent that runs on the provisioned VMs is listening. The default port is 161.
  • User: This is the user that the NetScaler device uses to invoke scaleup and scaledown API calls to the cloud. If no option is specified, the user who configures AutoScaling is applied. Specify another user name to override.
  • Apply: Click Apply to create the AutoScale configuration.
Disabling and Enabling an AutoScale Configuration
If you want to perform any maintenance operation on the AutoScale VM instances, disable the AutoScale configuration. When the AutoScale configuration is disabled, no scaleup or scaledown action is performed. You can use this downtime for the maintenance activities. To disable the AutoScale configuration, click the Disable AutoScale button.
The button toggles between enable and disable, depending on whether AutoScale is currently enabled or not. After the maintenance operations are done, you can enable the AutoScale configuration back. To enable, open the AutoScale configuration page again, then click the Enable AutoScale button.
Updating an AutoScale Configuration
You can update the various parameters and add or delete the conditions in a scaleup or scaledown rule. Before you update an AutoScale configuration, ensure that you disable the AutoScale load balancer rule by clicking the Disable AutoScale button.
After you modify the required AutoScale parameters, click Apply. To apply the new AutoScale policies, open the AutoScale configuration page again, then click the Enable AutoScale button.
Runtime Considerations
  • An administrator should not assign a VM to a load balancing rule which is configured for AutoScale.
  • If the NetScaler is shut down or restarted before VM provisioning is completed, the provisioned VM cannot become part of the load balancing rule even though the intent was to assign it to one. As a workaround, rename the AutoScale-provisioned VMs based on the rule name or ID, so that at any point in time the VMs can be reconciled with their load balancing rule.
  • Making API calls outside the context of AutoScale, such as destroyVM, on an autoscaled VM leaves the load balancing configuration in an inconsistent state. Though the VM is destroyed from the load balancer rule, NetScaler continues to show the VM as a service assigned to a rule.

20.9. Load Balancer Rules

A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs.

Note

If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.

20.9.1. Adding a Load Balancer Rule

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network where you want to load balance the traffic.
  4. Click View IP Addresses.
  5. Click the IP address for which you want to create the rule, then click the Configuration tab.
  6. In the Load Balancing node of the diagram, click View All.
    In a Basic zone, you can also create a load balancing rule without acquiring or selecting an IP address. CloudStack internally assigns an IP when you create the load balancing rule, and this IP is listed in the IP Addresses page when the rule is created.
    To do that, select the name of the network, then click the Add Load Balancer tab. Continue with step 7.
  7. Fill in the following:
    • Name: A name for the load balancer rule.
    • Public Port: The port receiving incoming traffic to be balanced.
    • Private Port: The port that the VMs will use to receive the traffic.
    • Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports a variety of well-known algorithms. If you are not familiar with these choices, you will find plenty of information about them on the Internet.
    • Stickiness: (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules.
    • AutoScale: Click Configure and complete the AutoScale configuration as explained in Section 20.8.5, “Configuring AutoScale”.
  8. Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply.
    The new load balancer rule appears in the list. You can repeat these steps to add more load balancer rules for this IP address.
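For scripted setups, the same rule can be created with the createLoadBalancerRule API command and VMs attached with assignToLoadBalancerRule. A rough sketch (IDs are placeholders, the algorithm value shown is the usual round-robin choice, and requests must be authenticated as usual):
    http://<management-server>:8080/client/api?command=createLoadBalancerRule&name=web-lb&publicipid=<ip-id>&publicport=80&privateport=8080&algorithm=roundrobin
    http://<management-server>:8080/client/api?command=assignToLoadBalancerRule&id=<lb-rule-id>&virtualmachineids=<vm-id-1>,<vm-id-2>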

20.9.2. Sticky Session Policies for Load Balancer Rules

Sticky sessions are used in Web-based applications to ensure continued availability of information across the multiple requests in a user's session. For example, if a shopper is filling a cart, you need to remember what has been purchased so far. The concept of "stickiness" is also referred to as persistence or maintaining state.
Any load balancer rule defined in CloudStack can have a stickiness policy. The policy consists of a name, stickiness method, and parameters. The parameters are name-value pairs or flags, which are defined by the load balancer vendor. The stickiness method could be load balancer-generated cookie, application-generated cookie, or source-based. In the source-based method, the source IP address is used to identify the user and locate the user’s stored data. In the other methods, cookies are used. The cookie generated by the load balancer or application is included in request and response URLs to create persistence. The cookie name can be specified by the administrator or automatically generated. A variety of options are provided to control the exact behavior of cookies, such as how they are generated and whether they are cached.
For the most up to date list of available stickiness methods, see the CloudStack UI or call listNetworks and check the SupportedStickinessMethods capability.
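A stickiness policy can also be attached to an existing rule through the API, typically with the createLBStickinessPolicy command. The sketch below assumes a load balancer-generated cookie method; the method name and its parameters should be checked against the SupportedStickinessMethods capability returned by listNetworks for your deployment:
    http://<management-server>:8080/client/api?command=createLBStickinessPolicy&lbruleid=<lb-rule-id>&name=web-sticky&methodname=LbCookie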

20.10. Guest IP Ranges

The IP ranges for guest network traffic are set on a per-account basis by the user. This allows the users to configure their network in a fashion that will enable VPN linking between their guest network and their clients.

20.11. Acquiring a New IP Address

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click Acquire New IP, and click Yes in the confirmation dialog.
    You are prompted for confirmation because, typically, IP addresses are a limited resource. Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding or static NAT rules.
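Acquiring an address corresponds to the associateIpAddress API command, and releasing it (described in the next section) to disassociateIpAddress. A minimal sketch with placeholder IDs, assuming authenticated requests:
    http://<management-server>:8080/client/api?command=associateIpAddress&networkid=<network-id>
    http://<management-server>:8080/client/api?command=disassociateIpAddress&id=<ip-address-id>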

20.12. Releasing an IP Address

When the last rule for an IP address is removed, you can release that IP address. The IP address still belongs to the VPC; however, it can be picked up for any guest network again.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click the IP address you want to release.
  6. Click the Release IP button.

20.13. Static NAT

A static NAT rule maps a public IP address to the private IP address of a VM in order to allow Internet traffic into the VM. The public IP address always remains the same, which is why it is called “static” NAT. This section tells how to enable or disable static NAT for a particular IP address.

20.13.1. Enabling or Disabling Static NAT

If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.
If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click the IP address you want to work with.
  6. Click the Static NAT button.
    The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.
  7. If you are enabling static NAT, a dialog appears where you can choose the destination VM and click Apply.
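The equivalent API commands are enableStaticNat and disableStaticNat. A minimal sketch with placeholder IDs (the IP address must already be associated with the network, as in the steps above):
    http://<management-server>:8080/client/api?command=enableStaticNat&ipaddressid=<ip-address-id>&virtualmachineid=<vm-id>
    http://<management-server>:8080/client/api?command=disableStaticNat&ipaddressid=<ip-address-id>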

20.14. IP Forwarding and Firewalling

By default, all incoming traffic to the public IP address is rejected. All outgoing traffic from the guests is also blocked by default.
To allow outgoing traffic, follow the procedure in Section 20.14.1, “Creating Egress Firewall Rules in an Advanced Zone”.
To allow incoming traffic, users may set up firewall rules and/or port forwarding rules. For example, you can use a firewall rule to open a range of ports on the public IP address, such as 33 through 44. Then use port forwarding rules to direct traffic from individual ports within that range to specific ports on user VMs. For example, one port forwarding rule could route incoming traffic on the public IP's port 33 to port 100 on one user VM's private IP. For more information, see Section 20.14.2, “Firewall Rules” and Section 20.14.3, “Port Forwarding”.

20.14.1. Creating Egress Firewall Rules in an Advanced Zone

Note

The egress firewall rules are supported only on virtual routers.
The egress traffic originates from a private network to a public network, such as the Internet. By default, egress traffic is blocked, so no outgoing traffic is allowed from a guest network to the Internet. However, you can control the egress traffic in an Advanced zone by creating egress firewall rules. When an egress firewall rule is applied, the traffic specific to the rule is allowed and the remaining traffic is blocked. When all the firewall rules are removed, the default policy, Block, is applied.
Consider the following scenarios to apply egress firewall rules:
  • Allow the egress traffic from a specified source CIDR. The source CIDR is part of the guest network CIDR.
  • Allow the egress traffic with destination protocol TCP, UDP, ICMP, or ALL.
  • Allow the egress traffic with destination protocol and port range. The port range is specified for TCP, UDP or for ICMP type and code.
To configure an egress firewall rule:
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In Select view, choose Guest networks, then click the Guest network you want.
  4. To add an egress rule, click the Egress rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this guest network:
    egress-firewall-rule.png: adding an egress firewall rule
    • CIDR: (Add by CIDR only) To send traffic only to the IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Protocol: The networking protocol that VMs use to send outgoing traffic. The TCP and UDP protocols are typically used for data exchange and end-user communications. The ICMP protocol is typically used to send error messages or network monitoring data.
    • Start Port, End Port: (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields.
    • ICMP Type, ICMP Code: (ICMP only) The type of message and error code that are sent.
  5. Click Add.
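Egress rules can also be created with the createEgressFirewallRule API command. An illustrative sketch that allows outbound HTTP from the whole guest network (the network ID is a placeholder, and the CIDR shown is an example guest network CIDR):
    http://<management-server>:8080/client/api?command=createEgressFirewallRule&networkid=<guest-network-id>&protocol=TCP&startport=80&endport=80&cidrlist=10.1.1.0/24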

20.14.2. Firewall Rules

By default, all incoming traffic to the public IP address is rejected by the firewall. To allow external traffic, you can open firewall ports by specifying firewall rules. You can optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to allow only incoming requests from certain IP addresses.
You cannot use firewall rules to open ports for an elastic IP address. When elastic IP is used, outside access is instead controlled through the use of security groups. See Section 20.7.2, “Adding a Security Group”.
In an advanced zone, you can also create egress firewall rules by using the virtual router. For more information, see Section 20.14.1, “Creating Egress Firewall Rules in an Advanced Zone”.
Firewall rules can be created using the Firewall tab in the Management Server UI. This tab is not displayed by default when CloudStack is installed. To display the Firewall tab, the CloudStack administrator must set the global configuration parameter firewall.rule.ui.enabled to "true."
To create a firewall rule:
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click the IP address you want to work with.
  6. Click the Configuration tab and fill in the following values.
    • Source CIDR. (Optional) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. Example: 192.168.0.0/22. Leave empty to allow all CIDRs.
    • Protocol. The communication protocol in use on the opened port(s).
    • Start Port and End Port. The port(s) you want to open on the firewall. If you are opening a single port, use the same number in both fields
    • ICMP Type and ICMP Code. Used only if Protocol is set to ICMP. Provide the type and code required by the ICMP protocol to fill out the ICMP header. Refer to ICMP documentation for more details if you are not sure what to enter
  7. Click Add.
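The equivalent API command is createFirewallRule. A sketch that opens TCP ports 33 through 44 on a public IP, as in the earlier example (the IP address ID is a placeholder, and cidrlist may be omitted to allow all sources):
    http://<management-server>:8080/client/api?command=createFirewallRule&ipaddressid=<ip-address-id>&protocol=TCP&startport=33&endport=44&cidrlist=0.0.0.0/0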

20.14.3. Port Forwarding

A port forward service is a set of port forwarding rules that define a policy. A port forward service is then applied to one or more guest VMs. The guest VM then has its inbound network access managed according to the policy defined by the port forwarding service. You can optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to allow only incoming requests from certain IP addresses to be forwarded.
A guest VM can be in any number of port forward services. Port forward services can be defined but have no members. If a guest VM is part of more than one network, port forwarding rules will function only if they are defined on the default network.
You cannot use port forwarding to open ports for an elastic IP address. When elastic IP is used, outside access is instead controlled through the use of security groups. See Security Groups.
To set up port forwarding:
  1. Log in to the CloudStack UI as an administrator or end user.
  2. If you have not already done so, add a public IP address range to a zone in CloudStack. See Adding a Zone and Pod in the Installation Guide.
  3. Add one or more VM instances to CloudStack.
  4. In the left navigation bar, click Network.
  5. Click the name of the guest network where the VMs are running.
  6. Choose an existing IP address or acquire a new IP address. See Section 20.11, “Acquiring a New IP Address”. Click the name of the IP address in the list.
  7. Click the Configuration tab.
  8. In the Port Forwarding node of the diagram, click View All.
  9. Fill in the following:
    • Public Port. The port to which public traffic will be addressed on the IP address you acquired in the previous step.
    • Private Port. The port on which the instance is listening for forwarded public traffic.
    • Protocol. The communication protocol in use between the two ports
  10. Click Add.
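The equivalent API command is createPortForwardingRule. A sketch matching the earlier example of routing public port 33 to private port 100 on one VM (the IDs are placeholders, and the request must be authenticated as usual):
    http://<management-server>:8080/client/api?command=createPortForwardingRule&ipaddressid=<ip-address-id>&protocol=TCP&publicport=33&privateport=100&virtualmachineid=<vm-id>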

20.15. IP Load Balancing

The user may choose to associate the same public IP for multiple guests. CloudStack implements a TCP-level load balancer with the following policies.
  • Round-robin
  • Least connection
  • Source IP
This is similar to port forwarding but the destination may be multiple IP addresses.

20.16. DNS and DHCP

The Virtual Router provides DNS and DHCP services to the guests. It proxies DNS requests to the DNS server configured on the Availability Zone.

20.17. VPN

CloudStack account owners can create virtual private networks (VPN) to access their virtual machines. If the guest network is instantiated from a network offering that offers the Remote Access VPN service, the virtual router (based on the System VM) is used to provide the service. CloudStack provides a L2TP-over-IPsec-based remote access VPN service to guest virtual networks. Since each network gets its own virtual router, VPNs are not shared across the networks. VPN clients native to Windows, Mac OS X and iOS can be used to connect to the guest networks. The account owner can create and manage users for their VPN. CloudStack does not use its account database for this purpose but uses a separate table. The VPN user database is shared across all the VPNs created by the account owner. All VPN users get access to all VPNs created by the account owner.

Note

Make sure that not all traffic goes through the VPN. That is, the route installed by the VPN should be only for the guest network and not for all traffic.
  • Road Warrior / Remote Access. Users want to be able to connect securely from a home or office to a private network in the cloud. Typically, the IP address of the connecting client is dynamic and cannot be preconfigured on the VPN server.
  • Site to Site. In this scenario, two private subnets are connected over the public Internet with a secure VPN tunnel. The cloud user’s subnet (for example, an office network) is connected through a gateway to the network in the cloud. The address of the user’s gateway must be preconfigured on the VPN server in the cloud. Note that although L2TP-over-IPsec can be used to set up Site-to-Site VPNs, this is not the primary intent of this feature. For more information, see Section 20.17.4, “Setting Up a Site-to-Site VPN Connection”

20.17.1. Configuring VPN

To set up VPN for the cloud:
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, click Global Settings.
  3. Set the following global configuration parameters.
    • remote.access.vpn.client.ip.range – The range of IP addresses to be allocated to remote access VPN clients. The first IP in the range is used by the VPN server.
    • remote.access.vpn.psk.length – Length of the IPSec key.
    • remote.access.vpn.user.limit – Maximum number of VPN users per account.
To enable VPN for a particular network:
  1. Log in as a user or administrator to the CloudStack UI.
  2. In the left navigation, click Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click one of the displayed IP address names.
  6. Click the Enable VPN button.
    The IPsec key is displayed in a popup window.
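These operations map to the createRemoteAccessVpn and addVpnUser API commands. A minimal sketch with placeholder values, assuming the public IP has already been acquired on the network and requests are authenticated as usual:
    http://<management-server>:8080/client/api?command=createRemoteAccessVpn&publicipid=<ip-address-id>
    http://<management-server>:8080/client/api?command=addVpnUser&username=vpnuser1&password=<password>
The IPsec preshared key is shown in the UI popup mentioned above and is also included in the API response.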

20.17.2. Using VPN with Windows

The procedure to use VPN varies by Windows version. Generally, the user must edit the VPN properties and make sure that the default route is not the VPN. The following steps are for Windows L2TP clients on Windows Vista. The commands should be similar for other Windows versions.
  1. Log in to the CloudStack UI and click on the source NAT IP for the account. The VPN tab should display the IPsec preshared key. Make a note of this and the source NAT IP. The UI also lists one or more users and their passwords. Choose one of these users, or, if none exists, add a user and password.
  2. On the Windows box, go to Control Panel, then select Network and Sharing center. Click Setup a connection or network.
  3. In the next dialog, select No, create a new connection.
  4. In the next dialog, select Use my Internet Connection (VPN).
  5. In the next dialog, enter the source NAT IP from step 1 and give the connection a name. Check Don't connect now.
  6. In the next dialog, enter the user name and password selected in step 1.
  7. Click Create.
  8. Go back to the Control Panel and click Network Connections to see the new connection. The connection is not active yet.
  9. Right-click the new connection and select Properties. In the Properties dialog, select the Networking tab.
  10. In Type of VPN, choose L2TP IPsec VPN, then click IPsec settings. Select Use preshared key. Enter the preshared key from Step 1.
  11. The connection is ready for activation. Go back to Control Panel -> Network Connections and double-click the created connection.
  12. Enter the user name and password from Step 1.

20.17.3. Using VPN with Mac OS X

First, be sure you've configured the VPN settings in your CloudStack install. This section is only concerned with connecting via Mac OS X to your VPN.
Note, these instructions were written on Mac OS X 10.7.5. They may differ slightly in older or newer releases of Mac OS X.
  1. On your Mac, open System Preferences and click Network.
  2. Make sure Send all traffic over VPN connection is not checked.
  3. If your preferences are locked, you'll need to click the lock in the bottom left-hand corner to make any changes and provide your administrator credentials.
  4. You will need to create a new network entry. Click the plus icon on the bottom left-hand side and you'll see a dialog that says "Select the interface and enter a name for the new service." Select VPN from the Interface drop-down menu, and "L2TP over IPSec" for the VPN Type. Enter whatever you like within the "Service Name" field.
  5. You'll now have a new network interface with the name of whatever you put in the "Service Name" field. For the purposes of this example, we'll assume you've named it "CloudStack." Click on that interface and provide the IP address of the interface for your VPN under the Server Address field, and the user name for your VPN under Account Name.
  6. Click Authentication Settings, and add the user's password under User Authentication and enter the pre-shared IPSec key in the Shared Secret field under Machine Authentication. Click OK.
  7. You may also want to click the "Show VPN status in menu bar" but that's entirely optional.
  8. Now click "Connect" and you will be connected to the CloudStack VPN.

20.17.4. Setting Up a Site-to-Site VPN Connection

A Site-to-Site VPN connection helps you establish a secure connection from an enterprise datacenter to the cloud infrastructure. This allows users to access the guest VMs by establishing a VPN connection to the virtual router of the account from a device in the datacenter of the enterprise. Having this facility eliminates the need to establish VPN connections to individual VMs.
The supported endpoints on the remote datacenters are:
  • Cisco ISR with IOS 12.4 or later
  • Juniper J-Series routers with JunOS 9.5 or later

Note

In addition to the specific Cisco and Juniper devices listed above, the expectation is that any Cisco or Juniper device running the supported operating systems is able to establish VPN connections.
To set up a Site-to-Site VPN connection, perform the following:
  1. Create a Virtual Private Cloud (VPC).
  2. Create a VPN Customer Gateway.
  3. Create a VPN gateway for the VPC that you created.
  4. Create VPN connection from the VPC VPN gateway to the customer VPN gateway.

Note

Appropriate events are generated on the CloudStack UI when the status of a Site-to-Site VPN connection changes from connected to disconnected, or vice versa. Currently, no events are generated when establishing a VPN connection fails or is pending.
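For reference, steps 2 through 4 above correspond to the createVpnCustomerGateway, createVpnGateway, and createVpnConnection API commands. The following is only a rough sketch with placeholder values; the full set of accepted parameters (IKE and ESP policies, lifetimes, and so on) should be checked against the API reference for your release:
    http://<management-server>:8080/client/api?command=createVpnCustomerGateway&name=dc-gateway&gateway=<remote-public-ip>&cidrlist=172.16.0.0/16&ipsecpsk=<preshared-key>&ikepolicy=aes128-sha1&esppolicy=aes128-sha1
    http://<management-server>:8080/client/api?command=createVpnGateway&vpcid=<vpc-id>
    http://<management-server>:8080/client/api?command=createVpnConnection&s2svpngatewayid=<vpn-gateway-id>&s2scustomergatewayid=<customer-gateway-id>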

20.17.4.1. Creating and Updating a VPN Customer Gateway

Note

A VPN customer gateway can be connected to only one VPN gateway at a time.
To add a VPN Customer Gateway:
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPN Customer Gateway.
  4. Click Add site-to-site VPN.
    addvpncustomergateway.png: adding a customer gateway.
    Provide the following information:
    • Name: A unique name for the VPN customer gateway you create.
    • Gateway: The IP address for the remote gateway.
    • CIDR list: The guest CIDR list of the remote subnets. Enter a CIDR or a comma-separated list of CIDRs. Ensure that a guest CIDR list does not overlap with the VPC’s CIDR or with another guest CIDR. The CIDR must be RFC1918-compliant.
    • IPsec Preshared Key: Preshared keying is a method where the endpoints of the VPN share a secret key. This key value is used to authenticate the customer gateway and the VPC VPN gateway to each other.

      Note

      The IKE peers (VPN end points) authenticate each other by computing and sending a keyed hash of data that includes the Preshared key. If the receiving peer is able to create the same hash independently by using its Preshared key, it knows that both peers must share the same secret, thus authenticating the customer gateway.
    • IKE Encryption: The Internet Key Exchange (IKE) policy for phase-1. The supported encryption algorithms are AES128, AES192, AES256, and 3DES. Authentication is accomplished through the Preshared Keys.

      Note

      Phase-1 is the first phase in the IKE process. In this initial negotiation phase, the two VPN endpoints agree on the methods to be used to provide security for the underlying IP traffic. Phase-1 authenticates the two VPN gateways to each other by confirming that the remote gateway has a matching Preshared Key.
    • IKE Hash: The IKE hash for phase-1. The supported hash algorithms are SHA1 and MD5.
    • IKE DH: A public-key cryptography protocol which allows two parties to establish a shared secret over an insecure communications channel. The 1536-bit Diffie-Hellman group is used within IKE to establish session keys. The supported options are None, Group-5 (1536-bit) and Group-2 (1024-bit).
    • ESP Encryption: Encapsulating Security Payload (ESP) algorithm within phase-2. The supported encryption algorithms are AES128, AES192, AES256, and 3DES.

      Note

      Phase-2 is the second phase in the IKE process. The purpose of IKE phase-2 is to negotiate IPsec security associations (SAs) to set up the IPsec tunnel. In phase-2, new keying material is derived from the Diffie-Hellman key exchange in phase-1 to provide session keys for protecting the VPN data flow.
    • ESP Hash: Encapsulating Security Payload (ESP) hash for phase-2. Supported hash algorithms are SHA1 and MD5.
    • Perfect Forward Secrecy: Perfect Forward Secrecy (PFS) is the property that ensures that a session key derived from a set of long-term public and private keys will not be compromised even if one of those long-term keys is compromised in the future. This property enforces a new Diffie-Hellman key exchange, which provides keying material with a longer life and thereby greater resistance to cryptographic attacks. The available options are None, Group-5 (1536-bit), and Group-2 (1024-bit). The security of the key exchange increases as the DH group grows larger, as does the time of the exchange.

      Note

      When PFS is turned on, the two gateways must generate a new set of phase-1 keys for every negotiation of a new phase-2 SA. This extra layer of protection ensures that, if the phase-2 SAs have expired, the keys used for new phase-2 SAs have not been generated from the current phase-1 keying material.
    • IKE Lifetime (seconds): The phase-1 lifetime of the security association in seconds. Default is 86400 seconds (1 day). Whenever the time expires, a new phase-1 exchange is performed.
    • ESP Lifetime (seconds): The phase-2 lifetime of the security association in seconds. Default is 3600 seconds (1 hour). Whenever the value is exceeded, a re-key is initiated to provide new IPsec encryption and authentication session keys.
    • Dead Peer Detection: A method to detect an unavailable Internet Key Exchange (IKE) peer. Select this option if you want the virtual router to query the liveliness of its IKE peer at regular intervals. It’s recommended to have the same DPD configuration on both sides of the VPN connection.
  5. Click OK. The equivalent API call is sketched below.
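The dialog fields above map to parameters of the createVpnCustomerGateway command. The following is a minimal sketch with illustrative values; the exact policy string format (encryption-hash, optionally followed by the DH group) and the placeholder values should be verified against the API reference for your release:
command=createVpnCustomerGateway
    &name=RemoteDC1
    &gateway=203.0.113.10
    &cidrlist=10.10.0.0/16
    &ipsecpsk=<preshared-key>
    &ikepolicy=aes128-sha1;modp1536
    &esppolicy=aes128-sha1
    &ikelifetime=86400
    &esplifetime=3600
    &dpd=true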
Updating and Removing a VPN Customer Gateway
You can update a customer gateway either when it has no VPN connection, or when its associated VPN connection is in the Error state.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPN Customer Gateway.
  4. Select the VPN customer gateway you want to work with.
  5. To modify the required parameters, click the Edit VPN Customer Gateway button edit.png: button to edit a VPN customer gateway
  6. To remove the VPN customer gateway, click the Delete VPN Customer Gateway button delete.png: button to remove a VPN customer gateway
  7. Click OK.

20.17.4.2. Creating a VPN gateway for the VPC

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select Site-to-Site VPN.
    If you are creating the VPN gateway for the first time, selecting Site-to-Site VPN prompts you to create a VPN gateway.
  7. In the confirmation dialog, click Yes to confirm.
    Within a few moments, the VPN gateway is created. You will be prompted to view the details of the VPN gateway you have created. Click Yes to confirm.
    The following details are displayed in the VPN Gateway page:
    • IP Address
    • Account
    • Domain

20.17.4.3. Creating a VPN Connection

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you create for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select Site-to-Site VPN.
    The Site-to-Site VPN page is displayed.
  7. From the Select View drop-down, ensure that VPN Connection is selected.
  8. Click Create VPN Connection.
    The Create VPN Connection dialog is displayed:
    createvpnconnection.png: creating a vpn connection to the customer gateway.
  9. Select the desired customer gateway, then click OK to confirm.
    Within a few moments, the VPN Connection is displayed.
    The following information on the VPN connection is displayed:
    • IP Address
    • Gateway
    • State
    • IPSec Preshared Key
    • IKE Policy
    • ESP Policy

20.17.4.4. Restarting and Removing a VPN Connection

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select Site-to-Site VPN.
    The Site-to-Site VPN page is displayed.
  7. From the Select View drop-down, ensure that VPN Connection is selected.
    All the VPN connections you created are displayed.
  8. Select the VPN connection you want to work with.
    The Details tab is displayed.
  9. To remove a VPN connection, click the Delete VPN connection button remove-vpn.png: button to remove a VPN connection
    To restart a VPN connection, click the Reset VPN connection button present in the Details tab. reset-vpn.png: button to reset a VPN connection

20.18. About Inter-VLAN Routing

Inter-VLAN Routing is the capability to route network traffic between VLANs. This feature enables you to build Virtual Private Clouds (VPC), an isolated segment of your cloud, that can hold multi-tier applications. These tiers are deployed on different VLANs that can communicate with each other. You provision VLANs to the tiers you create, and VMs can be deployed on different tiers. The VLANs are connected to a virtual router, which facilitates communication between the VMs. In effect, you can segment VMs by means of VLANs into different networks that can host multi-tier applications, such as Web, Application, or Database. Such segmentation by means of VLANs logically separates application VMs for higher security and lower broadcast traffic, while remaining physically connected to the same device.
This feature is supported on XenServer and VMware hypervisors.
The major advantages are:
  • The administrator can deploy a set of VLANs and allow users to deploy VMs on these VLANs. A guest VLAN is randomly allotted to an account from a pre-specified set of guest VLANs. All the VMs of a certain tier of an account reside on the guest VLAN allotted to that account.

    Note

    A VLAN allocated for an account cannot be shared between multiple accounts.
  • The administrator can allow users to create their own VPCs and deploy applications. In this scenario, the VMs that belong to the account are deployed on the VLANs allotted to that account.
  • Both administrators and users can create multiple VPCs. The guest network NIC is plugged to the VPC virtual router when the first VM is deployed in a tier.
  • The administrator can create gateways, such as a public gateway, a private gateway, or a VPN gateway, to send traffic to or receive traffic from the VMs.
  • Both administrators and users can create various possible destination-gateway combinations. However, only one gateway of each type can be used in a deployment.
    For example:
    • VLANs and Public Gateway: For example, an application is deployed in the cloud, and the Web application VMs communicate with the Internet.
    • VLANs, VPN Gateway, and Public Gateway: For example, an application is deployed in the cloud; the Web application VMs communicate with the Internet; and the database VMs communicate with the on-premise devices.
  • The administrator can define an Access Control List (ACL) on the virtual router to filter the traffic among the VLANs or between the Internet and a VLAN. You can define ACLs based on CIDR, port range, protocol, type code (if the ICMP protocol is selected), and Ingress/Egress type.
The following figure shows the possible deployment scenarios of an Inter-VLAN setup:
mutltier.png: a multi-tier setup.
To set up a multi-tier Inter-VLAN deployment, see Section 20.19, “Configuring a Virtual Private Cloud”.

20.19. Configuring a Virtual Private Cloud

20.19.1. About Virtual Private Clouds

CloudStack Virtual Private Cloud is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. You can launch VMs in the virtual network that can have private addresses in the range of your choice, for example: 10.0.0.0/16. You can define network tiers within your VPC network range, which in turn enables you to group similar kinds of instances based on IP address range.
For example, if a VPC has the private range 10.0.0.0/16, its guest networks can have the network ranges 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, and so on.
Major Components of a VPC:
A VPC is comprised of the following network components:
  • VPC: A VPC acts as a container for multiple isolated networks that can communicate with each other via its virtual router.
  • Network Tiers: Each tier acts as an isolated network with its own VLANs and CIDR list, where you can place groups of resources, such as VMs. The tiers are segmented by means of VLANs. The NIC of each tier acts as its gateway.
  • Virtual Router: A virtual router is automatically created and started when you create a VPC. The virtual router connects the tiers and directs traffic among the public gateway, the VPN gateways, and the NAT instances. For each tier, a corresponding NIC and IP exist in the virtual router. The virtual router provides DNS and DHCP services through its IP.
  • Public Gateway: The traffic to and from the Internet is routed to the VPC through the public gateway. In a VPC, the public gateway is not exposed to the end user; therefore, static routes are not supported for the public gateway.
  • Private Gateway: All the traffic to and from a private network is routed to the VPC through the private gateway. For more information, see Section 20.19.5, “Adding a Private Gateway to a VPC”.
  • VPN Gateway: The VPC side of a VPN connection.
  • Site-to-Site VPN Connection: A hardware-based VPN connection between your VPC and your datacenter, home network, or co-location facility. For more information, see Section 20.17.4, “Setting Up a Site-to-Site VPN Connection”.
  • Customer Gateway: The customer side of a VPN Connection. For more information, see Section 20.17.4.1, “Creating and Updating a VPN Customer Gateway”.
  • NAT Instance: An instance that provides Port Address Translation for instances to access the Internet via the public gateway. For more information, see Section 20.19.9, “Enabling or Disabling Static NAT on a VPC”.
Network Architecture in a VPC
In a VPC, the following four basic options of network architectures are present:
  • VPC with a public gateway only
  • VPC with public and private gateways
  • VPC with public and private gateways and site-to-site VPN access
  • VPC with a private gateway only and site-to-site VPN access
Connectivity Options for a VPC
You can connect your VPC to:
  • The Internet through the public gateway.
  • The corporate datacenter by using a site-to-site VPN connection through the VPN gateway.
  • Both the Internet and your corporate datacenter by using both the public gateway and a VPN gateway.
VPC Network Considerations
Consider the following before you create a VPC:
  • A VPC, by default, is created in the enabled state.
  • A VPC can be created in an Advanced zone only, and can't belong to more than one zone at a time.
  • The default number of VPCs an account can create is 20. You can change this limit by setting the max.account.vpcs global parameter.
  • The default number of tiers an account can create within a VPC is 3. You can configure this number by using the vpc.max.networks global parameter (see the example after this list).
  • Each tier should have a unique CIDR in the VPC. Ensure that the tier's CIDR is within the VPC CIDR range.
  • A tier belongs to only one VPC.
  • All network tiers inside the VPC should belong to the same account.
  • When a VPC is created, by default, a SourceNAT IP is allocated to it. The Source NAT IP is released only when the VPC is removed.
  • A public IP can be used for only one purpose at a time. If the IP is a sourceNAT, it cannot be used for StaticNAT or port forwarding.
  • The instances have only the private IP addresses that you provision. To communicate with the Internet, enable NAT for an instance that you launch in your VPC.
  • Only new networks can be added to a VPC. The maximum number of networks per VPC is limited by the value you specify in the vpc.max.networks parameter. The default value is three.
  • The load balancing service can be supported by only one tier inside the VPC.
  • If an IP address is assigned to a tier:
    • That IP can't be used by more than one tier at a time in the VPC. For example, if you have tiers A and B, and a public IP1, you can create a port forwarding rule by using the IP either for A or B, but not for both.
    • That IP can't be used for StaticNAT, load balancing, or port forwarding rules for another guest network inside the VPC.
  • Remote access VPN is not supported in VPC networks.
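The VPC limits mentioned above are global configuration parameters. Assuming you want to change the defaults, they can be updated in the UI under Global Settings or with the updateConfiguration API command; the values below are illustrative:
command=updateConfiguration&name=max.account.vpcs&value=25
command=updateConfiguration&name=vpc.max.networks&value=5
As with other global parameters, restart the Management Server if the new values do not take effect immediately.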

20.19.2. Adding a Virtual Private Cloud

When creating a VPC, you simply provide the zone and a set of IP addresses for the VPC network address space. You specify this set of addresses in the form of a Classless Inter-Domain Routing (CIDR) block. A minimal API example is sketched after the procedure below.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
  4. Click Add VPC. The Add VPC page is displayed as follows:
    add-vpc.png: adding a vpc.
    Provide the following information:
    • Name: A short name for the VPC that you are creating.
    • Description: A brief description of the VPC.
    • Zone: Choose the zone where you want the VPC to be available.
    • Super CIDR for Guest Networks: Defines the CIDR range for all the tiers (guest networks) within a VPC. When you create a tier, ensure that its CIDR is within the Super CIDR value you enter. The CIDR must be RFC1918 compliant.
    • DNS domain for Guest Networks: If you want to assign a special domain name, specify the DNS suffix. This parameter is applied to all the tiers within the VPC. That implies, all the tiers you create in the VPC belong to the same DNS domain. If the parameter is not specified, a DNS domain name is generated automatically.
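As referenced above, the same VPC can be created with the createVPC API command. The following is a minimal sketch; the zone and VPC offering IDs are placeholders that can be looked up with listZones and listVPCOfferings:
command=createVPC
    &name=WebAppVPC
    &displaytext=VPC+for+the+web+application
    &cidr=10.0.0.0/16
    &zoneid=<zone-id>
    &vpcofferingid=<vpc-offering-id>
    &networkdomain=webapp.internal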

20.19.3. Adding Tiers

Tiers are distinct locations within a VPC that act as isolated networks, which do not have access to other tiers by default. Tiers are set up on different VLANs that can communicate with each other by using a virtual router. Tiers provide inexpensive, low latency network connectivity to other tiers within the VPC.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.

    Note

    End users can see their own VPCs, while root and domain admins can see any VPC they are authorized to see.
  4. Click the Configure button of the VPC for which you want to set up tiers.
    The Add new tier dialog is displayed, as follows:
    add-tier.png: adding a tier to a vpc.
    If you have already created tiers, the VPC diagram is displayed. Click Create Tier to add a new tier.
  5. Specify the following:
    All the fields are mandatory.
    • Name: A unique name for the tier you create.
    • Network Offering: The following default network offerings are listed: DefaultIsolatedNetworkOfferingForVpcNetworksNoLB, DefaultIsolatedNetworkOfferingForVpcNetworks
      In a VPC, only one tier can be created by using LB-enabled network offering.
    • Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and does not overlap with the CIDR of any existing tier within the VPC.
    • Netmask: The netmask for the tier you create.
      For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0.
  6. Click OK.
  7. Continue with configuring the access control list for the tier. A minimal API equivalent of this procedure is sketched below.
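A tier is created through the API as a guest network that carries a vpcid. The following createNetwork sketch reuses the example values from step 5; the offering and zone IDs are placeholders:
command=createNetwork
    &name=WebTier
    &displaytext=Web+tier
    &networkofferingid=<network-offering-id>
    &zoneid=<zone-id>
    &vpcid=<vpc-id>
    &gateway=10.0.1.1
    &netmask=255.255.255.0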

20.19.4. Configuring Access Control List

Define a Network Access Control List (ACL) on the VPC virtual router to control incoming (ingress) and outgoing (egress) traffic between the VPC tiers, and between the tiers and the Internet. By default, all incoming and outgoing traffic to the guest networks is blocked. To open the ports, you must create a new network ACL. Network ACLs can be created for the tiers only if the NetworkACL service is supported. A minimal API example is sketched after the procedure below.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  5. Select Network ACLs.
    The Network ACLs page is displayed.
  6. Click Add Network ACLs.
    To add an ACL rule, fill in the following fields to specify what kind of network traffic is allowed in this tier.
    • CIDR: The CIDR acts as the Source CIDR for the Ingress rules, and Destination CIDR for the Egress rules. To accept traffic only from or to the IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the incoming traffic. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Protocol: The networking protocol that sources use to send traffic to the tier. The TCP and UDP protocols are typically used for data exchange and end-user communications. The ICMP protocol is typically used to send error messages or network monitoring data.
    • Start Port, End Port (TCP, UDP only): A range of listening ports that are the destination for the incoming traffic. If you are opening a single port, use the same number in both fields.
    • Select Tier: Select the tier for which you want to add this ACL rule.
    • ICMP Type, ICMP Code (ICMP only): The type of message and error code that will be sent.
    • Traffic Type: Select the traffic type you want to apply.
      • Egress: To add an egress rule, select Egress from the Traffic type drop-down box and click Add. This specifies what type of traffic is allowed to be sent out of VM instances in this tier. If no egress rules are specified, all traffic from the tier is allowed out at the VPC virtual router. Once egress rules are specified, only the traffic specified in egress rules and the responses to any traffic that has been allowed in through an ingress rule are allowed out. No egress rule is required for the VMs in a tier to communicate with each other.
      • Ingress: To add an ingress rule, select Ingress from the Traffic type drop-down box and click Add. This specifies what network traffic is allowed into the VM instances in this tier. If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.

      Note

      By default, all incoming and outgoing traffic to the guest networks is blocked. To open the ports, create a new network ACL.
  7. Click Add. The ACL rule is added.
    To view the list of ACL rules you have added, click the desired tier from the Network ACLs page, then select the Network ACL tab.
    network-acl.png: adding, editing, deleting an ACL rule.
    You can edit the tags assigned to the ACL rules and delete the ACL rules you have created. Click the appropriate button in the Actions column.
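The equivalent API command is createNetworkACL. As a minimal sketch, the following rule would allow inbound HTTP from any source into the tier identified by the placeholder network ID:
command=createNetworkACL
    &networkid=<tier-network-id>
    &protocol=TCP
    &startport=80
    &endport=80
    &cidrlist=0.0.0.0/0
    &traffictype=Ingress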

20.19.5. Adding a Private Gateway to a VPC

A private gateway can be added by the root admin only. The VPC private network has a 1:1 relationship with the NIC of the physical network. No gateways with duplicated VLAN and IP are allowed in the same data center. A minimal API example is sketched after the procedure below.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to add the private gateway.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Private Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select Private Gateways.
    The Gateways page is displayed.
  7. Click Add new gateway:
    add-new-gateway-vpc.png: adding a private gateway for the VPC.
  8. Specify the following:
    • Physical Network: The physical network you have created in the zone.
    • IP Address: The IP address associated with the VPC gateway.
    • Gateway: The gateway through which the traffic is routed to and from the VPC.
    • Netmask: The netmask associated with the VPC gateway.
    • VLAN: The VLAN associated with the VPC gateway.
    The new gateway appears in the list. You can repeat these steps to add more gateways for this VPC. A minimal API equivalent is sketched below.
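For automation, the same gateway can be added with the createPrivateGateway API command (root admin only). A minimal sketch with placeholder and illustrative values:
command=createPrivateGateway
    &vpcid=<vpc-id>
    &physicalnetworkid=<physical-network-id>
    &vlan=200
    &ipaddress=192.168.100.5
    &gateway=192.168.100.1
    &netmask=255.255.255.0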

20.19.6. Deploying VMs to the Tier

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed.
  5. Click the Add VM button of the tier for which you want to add a VM.
    The Add Instance page is displayed.
    Follow the on-screen instructions to add an instance. For information on adding an instance, see the Adding Instances section in the Installation Guide.

20.19.7. Acquiring a New IP Address for a VPC

When you acquire an IP address, all IP addresses are allocated to the VPC, not to the guest networks within the VPC. The IPs are associated with a guest network only when the first port forwarding, load balancing, or static NAT rule is created for the IP or the network. An IP can't be associated with more than one network at a time. A minimal API example is sketched after the procedure below.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select IP Addresses.
    The IP Addresses page is displayed.
  7. Click Acquire New IP, and click Yes in the confirmation dialog.
    You are prompted for confirmation because, typically, IP addresses are a limited resource. Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding, load balancing, and static NAT rules.
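The equivalent API call is associateIpAddress with the vpcid parameter, which allocates the address to the VPC rather than to a specific guest network. A minimal sketch (depending on your release, the zone may be inferred from the VPC):
command=associateIpAddress&vpcid=<vpc-id>&zoneid=<zone-id>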

20.19.8. Releasing an IP Address Alloted to a VPC

The IP address is a limited resource. If you no longer need a particular IP, you can disassociate it from its VPC and return it to the pool of available addresses. An IP address can be released from its tier only when all the networking rules (port forwarding, load balancing, or static NAT) for this IP address are removed. The released IP address still belongs to the same VPC. A minimal API example is sketched after the procedure below.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC whose IP you want to release.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select IP Addresses.
    The IP Addresses page is displayed.
  7. Click the IP you want to release.
  8. In the Details tab, click the Release IP button release-ip-icon.png: button to release an IP.
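The corresponding API call is disassociateIpAddress, which takes the ID of the allocated public IP address (the UUID, not the dotted address):
command=disassociateIpAddress&id=<public-ip-id>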

20.19.9. Enabling or Disabling Static NAT on a VPC

A static NAT rule maps a public IP address to the private IP address of a VM in a VPC to allow Internet traffic to it. This section tells how to enable or disable static NAT for a particular IP address in a VPC. A minimal API example is sketched after the procedure below.
If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.
If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select IP Addresses.
    The IP Addresses page is displayed.
  7. Click the IP you want to work with.
  8. In the Details tab, click the Static NAT button. enable-disable.png: button to enable Static NAT. The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.
  9. If you are enabling static NAT, a dialog appears as follows:
    select-vmstatic-nat.png: selecting a tier to apply staticNAT.
  10. Select the tier and the destination VM, then click Apply.
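Through the API, the same toggle is performed with enableStaticNat and disableStaticNat. A minimal sketch; the networkid identifies the tier, which matters when the VM has NICs on more than one tier:
command=enableStaticNat
    &ipaddressid=<public-ip-id>
    &virtualmachineid=<vm-id>
    &networkid=<tier-network-id>
command=disableStaticNat&ipaddressid=<public-ip-id>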

20.19.10. Adding Load Balancing Rules on a VPC

A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs that belong to a network tier that provides load balancing service in a VPC. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs within a VPC.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to configure load balancing rules.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Select IP Addresses.
    The IP Addresses page is displayed.
  7. Click the IP address for which you want to create the rule, then click the Configuration tab.
  8. In the Load Balancing node of the diagram, click View All.
  9. Select the tier to which you want to apply the rule.

    Note

    In a VPC, the load balancing service is supported only on a single tier.
  10. Specify the following:
    • Name: A name for the load balancer rule.
    • Public Port: The port that receives the incoming traffic to be balanced.
    • Private Port: The port that the VMs will use to receive the traffic.
    • Algorithm. Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms:
      • Round-robin
      • Least connections
      • Source
    • Stickiness. (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules.
    • Add VMs: Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply.
The new load balancing rule appears in the list. You can repeat these steps to add more load balancing rules for this IP address. The equivalent API calls are sketched below.
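Through the API, the rule is created with createLoadBalancerRule and the VMs are attached with assignToLoadBalancerRule. A minimal sketch with placeholder IDs:
command=createLoadBalancerRule
    &name=web-lb
    &publicipid=<public-ip-id>
    &publicport=80
    &privateport=8080
    &algorithm=roundrobin
command=assignToLoadBalancerRule
    &id=<lb-rule-id>
    &virtualmachineids=<vm-id-1>,<vm-id-2>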

20.19.11. Adding a Port Forwarding Rule on a VPC

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC to which you want to deploy the VMs.
    The VPC page is displayed where all the tiers you created are listed in a diagram.
  5. Click the Settings icon.
    The following options are displayed.
    • IP Addresses
    • Gateways
    • Site-to-Site VPN
    • Network ACLs
  6. Choose an existing IP address or acquire a new IP address. Click the name of the IP address in the list.
    The IP Addresses page is displayed.
  7. Click the IP address for which you want to create the rule, then click the Configuration tab.
  8. In the Port Forwarding node of the diagram, click View All.
  9. Select the tier to which you want to apply the rule.
  10. Specify the following:
    • Public Port: The port to which public traffic will be addressed on the IP address you acquired in the previous step.
    • Private Port: The port on which the instance is listening for forwarded public traffic.
    • Protocol: The communication protocol in use between the two ports.
      • TCP
      • UDP
    • Add VM: Click Add VM. Select the name of the instance to which this rule applies, and click Apply.
      You can test the rule by opening an SSH session to the instance; an API sketch of the rule follows.
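The following createPortForwardingRule sketch forwards public port 22 on the acquired address to port 22 on the instance (IDs are placeholders):
command=createPortForwardingRule
    &ipaddressid=<public-ip-id>
    &protocol=TCP
    &publicport=22
    &privateport=22
    &virtualmachineid=<vm-id>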

20.19.12. Removing Tiers

You can remove a tier from a VPC. A removed tier cannot be restored. When a tier is removed, only the resources of the tier are expunged. All the network rules (port forwarding, load balancing, and static NAT) and the IP addresses associated with the tier are removed. The IP addresses still belong to the same VPC.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Click the Configure button of the VPC for which you want to set up tiers.
    The Configure VPC page is displayed. Locate the tier you want to work with.
  5. Click the Remove Tier button:
    remove-tier.png: removing a tier from a vpc.
    Wait for some time for the tier to be removed.

20.19.13. Editing, Restarting, and Removing a Virtual Private Cloud

Note

Ensure that all the tiers are removed before you remove a VPC.
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPC.
    All the VPCs that you have created for the account are listed in the page.
  4. Select the VPC you want to work with.
  5. To remove, click the Remove VPC button remove-vpc.png: button to remove a VPC
    You can edit the name and description of a VPC. To do that, select the VPC, then click the Edit button. edit-icon.png: button to edit a VPC
    To restart a VPC, select the VPC, then click the Restart button. restart-vpc.png: button to restart a VPC

20.20. Persistent Networks

The network that you can provision without having to deploy any VMs on it is called a persistent network. A persistent network can be part of a VPC or a non-VPC environment.
When you create other types of network, a network is only a database entry until the first VM is created on that network. When the first VM is created, a VLAN ID is assigned and the network is provisioned; when the last VM is destroyed, the VLAN ID is released and the network is no longer available. With the addition of persistent networks, you can create a network in CloudStack on which physical devices can be deployed without having to run any VMs on it.
One of the advantages of having a persistent network is that you can create a VPC with a tier consisting of only physical devices. For example, you might create a VPC for a three-tier application, deploy VMs for the Web and Application tiers, and use physical machines for the Database tier. Another use case is that if you are providing services by using physical hardware, you can define the network as persistent, so that the services are not discontinued even if all its VMs are destroyed.

20.20.1. Persistent Network Considerations

  • Persistent networks are designed for isolated networks.
  • All default network offerings are non-persistent.
  • A network offering cannot be edited, because changing it affects the behavior of the existing networks that were created using this network offering.
  • When you create a guest network, the network offering that you select defines the network persistence. This in turn depends on whether persistent network is enabled in the selected network offering.
  • An existing network can be made persistent by changing its network offering to an offering that has the Persistent option enabled. When this property is set, the network is provisioned even if it has no running VMs.
  • An existing network can be made non-persistent by changing its network offering to an offering that has the Persistent option disabled. If the network has no running VMs, during the next network garbage collection run the network is shut down.
  • When the last VM on a network is destroyed, the network garbage collector checks if the network offering associated with the network is persistent, and shuts down the network only if it is non-persistent.

20.20.2. Creating a Persistent Guest Network

To create a persistent network, perform the following:
  1. Create a network offering with the Persistent option enabled.
    See the Administration Guide.
  2. Select Network from the left navigation pane.
  3. Select the guest network that you want to make persistent.
  4. Click the Edit button.
  5. From the Network Offering drop-down, select the persistent network offering you have just created.
  6. Click OK.

Chapter 21. Working with System Virtual Machines

CloudStack uses several types of system virtual machines to perform tasks in the cloud. In general CloudStack manages these system VMs and creates, starts, and stops them as needed based on scale and immediate needs. However, the administrator should be aware of them and their roles to assist in debugging issues.

Note

You can configure the system.vm.random.password parameter to create a random system VM password to ensure higher security. If you set system.vm.random.password to true and restart the Management Server, a random password is generated and stored encrypted in the database. You can view the decrypted password under the system.vm.password global parameter on the CloudStack UI or by calling the listConfigurations API.

21.1. The System VM Template

The System VMs come from a single template. The System VM has the following characteristics:
  • Debian 6.0 ("Squeeze"), 2.6.32 kernel with the latest security patches from the Debian security APT repository
  • Has a minimal set of packages installed thereby reducing the attack surface
  • 32-bit for enhanced performance on Xen/VMWare
  • pvops kernel with Xen PV drivers, KVM virtio drivers, and VMware tools for optimum performance on all hypervisors
  • Xen tools inclusion allows performance monitoring
  • Latest versions of HAProxy, iptables, IPsec, and Apache from the Debian repository ensure improved security and speed
  • Latest version of JRE from Sun/Oracle ensures improved security and speed

21.2. Multiple System VM Support for VMware

Every CloudStack zone has a single System VM for template processing tasks such as downloading templates, uploading templates, and uploading ISOs. In a zone where VMware is used, additional System VMs can be launched to process VMware-specific tasks such as taking snapshots and creating private templates. The CloudStack management server launches additional System VMs for VMware-specific tasks as the load increases. The management server monitors and weighs all commands sent to these System VMs and performs dynamic load balancing and scaling-up of more System VMs.

21.3. Console Proxy

The Console Proxy is a type of System Virtual Machine that has a role in presenting a console view via the web UI. It connects the user’s browser to the VNC port made available via the hypervisor for the console of the guest. Both the administrator and end user web UIs offer a console connection.
Clicking a console icon brings up a new window. The AJAX code downloaded into that window refers to the public IP address of a console proxy VM. There is exactly one public IP address allocated per console proxy VM. The AJAX application connects to this IP. The console proxy then proxies the connection to the VNC port for the requested VM on the Host hosting the guest.

Note

The hypervisors will have many ports assigned to VNC usage so that multiple VNC sessions can occur simultaneously.
There is never any traffic to the guest virtual IP, and there is no need to enable VNC within the guest.
The console proxy VM will periodically report its active session count to the Management Server. The default reporting interval is five seconds. This can be changed through standard Management Server configuration with the parameter consoleproxy.loadscan.interval.
Assignment of a guest VM to a console proxy is determined by first checking whether the guest VM has a previous session associated with a console proxy. If it does, the Management Server assigns the guest VM to that Console Proxy VM regardless of the load on the proxy VM. Failing that, the first available running Console Proxy VM that has the capacity to handle new sessions is used.
Console proxies can be restarted by administrators but this will interrupt existing console sessions for users.

21.3.1. Using a SSL Certificate for the Console Proxy

The console viewing functionality uses a dynamic DNS service under the domain name realhostip.com to assist in providing SSL security to console sessions. The console proxy is assigned a public IP address. In order to avoid browser warnings for mismatched SSL certificates, the URL for the new console window is set to the form of https://aaa-bbb-ccc-ddd.realhostip.com. You will see this URL during console session creation. CloudStack includes the realhostip.com SSL certificate in the console proxy VM. Of course, CloudStack cannot know about the DNS A records for our customers' public IPs prior to shipping the software. CloudStack therefore runs a dynamic DNS server that is authoritative for the realhostip.com domain. It maps the aaa-bbb-ccc-ddd part of the DNS name to the IP address aaa.bbb.ccc.ddd on lookups. This allows the browser to correctly connect to the console proxy's public IP, where it then expects and receives a SSL certificate for realhostip.com, and SSL is set up without browser warnings.

21.3.2. Changing the Console Proxy SSL Certificate and Domain

If the administrator prefers, it is possible for the URL of the customer's console session to show a domain other than realhostip.com. The administrator can customize the displayed domain by selecting a different domain and uploading a new SSL certificate and private key. The domain must run a DNS service that is capable of resolving queries for addresses of the form aaa-bbb-ccc-ddd.your.domain to an IPv4 IP address in the form aaa.bbb.ccc.ddd, for example, 202.8.44.1. To change the console proxy domain, SSL certificate, and private key:
  1. Set up dynamic name resolution or populate all possible DNS names in your public IP range into your existing DNS server with the format aaa-bbb-ccc-ddd.company.com -> aaa.bbb.ccc.ddd.
  2. Generate the private key and certificate signing request (CSR). When you are using openssl to generate private/public key pairs and CSRs, for the private key that you are going to paste into the CloudStack UI, be sure to convert it into PKCS#8 format.
    1. Generate a new 2048-bit private key
      openssl genrsa -des3 -out yourprivate.key 2048
    2. Generate a new certificate CSR
      openssl req -new -key yourprivate.key -out yourcertificate.csr
    3. Head to the website of your favorite trusted Certificate Authority, purchase an SSL certificate, and submit the CSR. You should receive a valid certificate in return
    4. Convert your private key format into PKCS#8 encrypted format.
      openssl pkcs8 -topk8 -in yourprivate.key -out yourprivate.pkcs8.encrypted.key
    5. Convert your PKCS#8 encrypted private key into the PKCS#8 format that is compliant with CloudStack
      openssl pkcs8 -in yourprivate.pkcs8.encrypted.key -out yourprivate.pkcs8.key
  3. In the Update SSL Certificate screen of the CloudStack UI, paste the following:
    • The certificate you've just generated.
    • The private key you've just generated.
    • The desired new domain name; for example, company.com
    updatessl.png: Updating Console Proxy SSL Certificate
  4. Click OK.
    This stops all currently running console proxy VMs, then restarts them with the new certificate and key. Users might notice a brief interruption in console availability.
The Management Server generates URLs of the form "aaa-bbb-ccc-ddd.company.com" after this change is made. The new console requests will be served with the new DNS domain name, certificate, and key.

21.4. Virtual Router

The virtual router is a type of System Virtual Machine. The virtual router is one of the most frequently used service providers in CloudStack. The end user has no direct access to the virtual router. Users can ping the virtual router and take actions that affect it (such as setting up port forwarding), but users do not have SSH access into the virtual router.
There is no mechanism for the administrator to log in to the virtual router. Virtual routers can be restarted by administrators, but this will interrupt public network access and other services for end users. A basic test in debugging networking issues is to attempt to ping the virtual router from a guest VM. Some of the characteristics of the virtual router are determined by its associated system service offering.

21.4.1. Configuring the Virtual Router

You can set the following:
  • IP range
  • Supported network services
  • Default domain name for the network serviced by the virtual router
  • Gateway IP address
  • How often CloudStack fetches network usage statistics from CloudStack virtual routers. If you want to collect traffic metering data from the virtual router, set the global configuration parameter router.stats.interval. If you are not using the virtual router to gather network usage statistics, set it to 0.

21.4.2. Upgrading a Virtual Router with System Service Offerings

When CloudStack creates a virtual router, it uses default settings which are defined in a default system service offering. See Section 13.2, “System Service Offerings”. All the virtual routers in a single guest network use the same system service offering. You can upgrade the capabilities of the virtual router by creating and applying a custom system service offering.
  1. Define your custom system service offering. See Section 13.2.1, “Creating a New System Service Offering”. In System VM Type, choose Domain Router.
  2. Associate the system service offering with a network offering. See "Creating Network Offerings" in the Administrator's Guide.
  3. Apply the network offering to the network where you want the virtual routers to use the new system service offering. If this is a new network, follow the steps in Adding an Additional Guest Network on page 66. To change the service offering for existing virtual routers, follow the steps in Section 20.6.2, “Changing the Network Offering on a Guest Network”.

21.4.3. Best Practices for Virtual Routers

  • WARNING: Restarting a virtual router from a hypervisor console deletes all the iptables rules. To work around this issue, stop the virtual router and start it from the CloudStack UI.
  • WARNING: Do not use the destroyRouter API when only one router is available in the network, because the restartNetwork API with the cleanup=false parameter can't recreate it later. If you want to destroy and recreate the single router available in the network, use the restartNetwork API with the cleanup=true parameter, as shown below.
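For example, to destroy and recreate the single router of a network through the API (the network ID is a placeholder):
command=restartNetwork&id=<network-id>&cleanup=true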

21.5. Secondary Storage VM

In addition to the hosts, CloudStack’s Secondary Storage VM mounts and writes to secondary storage.
Submissions to secondary storage go through the Secondary Storage VM. The Secondary Storage VM can retrieve templates and ISO images from URLs using a variety of protocols.
The secondary storage VM provides a background task that takes care of a variety of secondary storage activities: downloading a new template to a Zone, copying templates between Zones, and snapshot backups.
The administrator can log in to the secondary storage VM if needed.

Chapter 22. System Reliability and High Availability

22.1. HA for Management Server

The CloudStack Management Server should be deployed in a multi-node configuration such that it is not susceptible to individual server failures. The Management Server itself (as distinct from the MySQL database) is stateless and may be placed behind a load balancer.
Normal operation of Hosts is not impacted by an outage of all Management Servers. All guest VMs will continue to work.
When the Management Server is down, no new VMs can be created, and the end user and admin UI, API, dynamic load distribution, and HA will cease to work.

22.2. Management Server Load Balancing

CloudStack can use a load balancer to provide a virtual IP for multiple Management Servers. The administrator is responsible for creating the load balancer rules for the Management Servers. The application requires persistence or stickiness across multiple sessions. The following chart lists the ports that should be load balanced and whether or not persistence is required.
Even if persistence is not required, enabling it is permitted.
Source Port      Destination Port            Protocol         Persistence Required?
80 or 443        8080 (or 20400 with AJP)    HTTP (or AJP)    Yes
8250             8250                        TCP              Yes
8096             8096                        HTTP             No
In addition to the above settings, the administrator is responsible for changing the 'host' global config value from the management server IP to the load balancer virtual IP address. If the 'host' value is not set to the VIP for Port 8250 and one of your management servers crashes, the UI is still available but the system VMs will not be able to contact the management server.

22.3. HA-Enabled Virtual Machines

The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, CloudStack detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. CloudStack has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.

22.4. HA for Hosts

The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, CloudStack detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. CloudStack has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another Host in the same cluster.
HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.

22.4.1. Dedicated HA Hosts

One or more hosts can be designated for use only by HA-enabled VMs that are restarting due to a host failure. Setting up a pool of such dedicated HA hosts as the recovery destination for all HA-enabled VMs is useful to:
  • Make it easier to determine which VMs have been restarted as part of the CloudStack high-availability function. If a VM is running on a dedicated HA host, then it must be an HA-enabled VM whose original host failed. (With one exception: it is possible for an administrator to manually migrate any VM to a dedicated HA host.)
  • Keep HA-enabled VMs from restarting on hosts which may be reserved for other purposes.
The dedicated HA option is set through a special host tag when the host is created. To allow the administrator to dedicate hosts to only HA-enabled VMs, set the global configuration variable ha.tag to the desired tag (for example, "ha_host"), and restart the Management Server. Enter the value in the Host Tags field when adding the host(s) that you want to dedicate to HA-enabled VMs.

Note

If you set ha.tag, be sure to actually use that tag on at least one host in your cloud. If the tag specified in ha.tag is not set for any host in the cloud, the HA-enabled VMs will fail to restart after a crash.

22.5. Primary Storage Outage and Data Loss

When a primary storage outage occurs, the hypervisor immediately stops all VMs stored on that storage device. Guests that are marked for HA will be restarted as soon as practical when the primary storage comes back online. With NFS, the hypervisor may allow the virtual machines to continue running, depending on the nature of the issue. For example, an NFS hang will cause the guest VMs to be suspended until storage connectivity is restored. Primary storage is not designed to be backed up. Individual volumes in primary storage can be backed up using snapshots.

22.6. Secondary Storage Outage and Data Loss

For a Zone that has only one secondary storage server, a secondary storage outage will have feature-level impact to the system but will not impact running guest VMs. It may become impossible for a user to create a VM with the selected template. A user may also not be able to save snapshots or examine/restore saved snapshots. These features will automatically become available again when the secondary storage comes back online.
Secondary storage data loss will impact recently added user data including templates, snapshots, and ISO images. Secondary storage should be backed up periodically. Multiple secondary storage servers can be provisioned within each zone to increase the scalability of the system.

Chapter 23. Managing the Cloud

23.1. Using Tags to Organize Resources in the Cloud

A tag is a key-value pair that stores metadata about a resource in the cloud. Tags are useful for categorizing resources. For example, you can tag a user VM with a value that indicates the user's city of residence. In this case, the key would be "city" and the value might be "Toronto" or "Tokyo." You can then request CloudStack to find all resources that have a given tag; for example, VMs for users in a given city.
You can tag a user virtual machine, volume, snapshot, guest network, template, ISO, firewall rule, port forwarding rule, public IP address, security group, load balancer rule, project, VPC, network ACL, or static route. You can not tag a remote access VPN.
You can work with tags through the UI or through the API commands createTags, deleteTags, and listTags. You can define multiple tags for each resource. There is no limit on the number of tags you can define. Each tag can be up to 255 characters long. Users can define tags on the resources they own, and administrators can define tags on any resources in the cloud.
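For example, the following createTags call (with a placeholder VM ID) attaches the "city" tag mentioned above to a user VM:
command=createTags
    &resourceids=<vm-id>
    &resourcetype=UserVm
    &tags[0].key=city
    &tags[0].value=Toronto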
An optional input parameter, "tags," exists on many of the list* API commands. The following example shows how to use this new parameter to find all the volumes having tag region=canada OR tag city=Toronto:
command=listVolumes
    &listAll=true
    &tags[0].key=region
    &tags[0].value=canada
    &tags[1].key=city
    &tags[1].value=Toronto
The following API commands have the "tags" input parameter:
  • listVirtualMachines
  • listVolumes
  • listSnapshots
  • listNetworks
  • listTemplates
  • listIsos
  • listFirewallRules
  • listPortForwardingRules
  • listPublicIpAddresses
  • listSecurityGroups
  • listLoadBalancerRules
  • listProjects
  • listVPCs
  • listNetworkACLs
  • listStaticRoutes

23.2. Changing the Database Configuration

The CloudStack Management Server stores database configuration information (e.g., hostname, port, credentials) in the file /etc/cloud/management/db.properties. To effect a change, edit this file on each Management Server, then restart the Management Server.

23.3. Changing the Database Password

You may need to change the password for the MySQL account used by CloudStack. If so, you'll need to change the password in MySQL, and then add the encrypted password to /etc/cloud/management/db.properties.
  1. Before changing the password, you'll need to stop CloudStack's management server and the usage engine if you've deployed that component.
    # service cloud-management stop
    # service cloud-usage stop
  2. Next, you'll update the password for the CloudStack user on the MySQL server.
    # mysql -u root -p
    At the MySQL shell, you'll change the password and flush privileges:
    update mysql.user set password=PASSWORD("newpassword123") where User='cloud';
    flush privileges;
    quit;
  3. The next step is to encrypt the password and copy the encrypted password to CloudStack's database configuration (/etc/cloud/management/db.properties).
    # java -classpath /usr/share/java/cloud-jasypt-1.8.jar \
      org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
      input="newpassword123" password="`cat /etc/cloud/management/key`" \
      verbose=false

    File encryption type

    Note that this is for the file encryption type. If you're using the web encryption type then you'll use password="management_server_secret_key"
  4. Now, you'll update /etc/cloud/management/db.properties with the new ciphertext. Open /etc/cloud/management/db.properties in a text editor, and update these parameters:
    db.cloud.password=ENC(encrypted_password_from_above) 
    db.usage.password=ENC(encrypted_password_from_above)
    
  5. After copying the new password over, you can now start CloudStack (and the usage engine, if necessary).
            # service cloud-management start
            # service cloud-usage start
    

23.4. Administrator Alerts

The system provides alerts and events to help with the management of the cloud. Alerts are notices to an administrator, generally delivered by e-mail, notifying the administrator that an error has occurred in the cloud. Alert behavior is configurable.
Events track all of the user and administrator actions in the cloud. For example, every guest VM start creates an associated event. Events are stored in the Management Server’s database.
Emails will be sent to administrators under the following circumstances:
  • The Management Server cluster runs low on CPU, memory, or storage resources
  • The Management Server loses heartbeat from a Host for more than 3 minutes
  • The Host cluster runs low on CPU, memory, or storage resources
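The recipients and mail server for these alert e-mails are set through global configuration parameters. The parameter names below are given for orientation and should be verified against the Global Settings page of your installation; the values are placeholders:
    alert.email.addresses=admin@example.com
    alert.email.sender=cloudstack@example.com
    alert.smtp.host=smtp.example.com
    alert.smtp.port=25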

23.5. Customizing the Network Domain Name

The root administrator can optionally assign a custom DNS suffix at the level of a network, account, domain, zone, or entire CloudStack installation, and a domain administrator can do so within their own domain. To specify a custom domain name and put it into effect, follow these steps.
  1. Set the DNS suffix at the desired scope
    • At the network level, the DNS suffix can be assigned through the UI when creating a new network, as described in Section 20.6.1, “Adding an Additional Guest Network” or with the updateNetwork command in the CloudStack API.
    • At the account, domain, or zone level, the DNS suffix can be assigned with the appropriate CloudStack API commands: createAccount, editAccount, createDomain, editDomain, createZone, or editZone.
    • At the global level, use the configuration parameter guest.domain.suffix. You can also use the CloudStack API command updateConfiguration. After modifying this global configuration, restart the Management Server to put the new setting into effect.
  2. To make the new DNS suffix take effect for an existing network, call the CloudStack API command updateNetwork. This step is not necessary when the DNS suffix was specified while creating a new network.
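As a sketch, step 2 as a raw API call might look like the following; the networkdomain parameter and the placeholder values are shown for illustration only, and the request must also be signed as described in the API chapters:
    command=updateNetwork
    &id=<network_id>
    &networkdomain=cloud.example.local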
The source of the network domain that is used depends on the following rules.
  • For all networks, if a network domain is specified as part of a network's own configuration, that value is used.
  • For an account-specific network, the network domain specified for the account is used. If none is specified, the system looks for a value in the domain, zone, and global configuration, in that order.
  • For a domain-specific network, the network domain specified for the domain is used. If none is specified, the system looks for a value in the zone and global configuration, in that order.
  • For a zone-specific network, the network domain specified for the zone is used. If none is specified, the system looks for a value in the global configuration.

23.6. Stopping and Restarting the Management Server

The root administrator will need to stop and restart the Management Server from time to time.
For example, after changing a global configuration parameter, a restart is required. If you have multiple Management Server nodes, restart all of them to put the new parameter value into effect consistently throughout the cloud.
To stop the Management Server, issue the following command at the operating system prompt on the Management Server node:
# service cloud-management stop
To start the Management Server:
# service cloud-management start

Chapter 24. CloudStack API

The CloudStack API is a low level API that has been used to implement the CloudStack web UIs. It is also a good basis for implementing other popular APIs such as EC2/S3 and emerging DMTF standards.
Many CloudStack API calls are asynchronous. These will return a Job ID immediately when called. This Job ID can be used to query the status of the job later. Also, status calls on impacted resources will provide some indication of their state.
The API has a REST-like query basis and returns results in XML or JSON.

24.1. Provisioning and Authentication API

CloudStack expects that a customer will have their own user provisioning infrastructure. It provides APIs to integrate with these existing systems, where the systems call out to CloudStack to add and remove users.
CloudStack supports pluggable authenticators. By default, CloudStack assumes it is provisioned with the user’s password, and as a result authentication is done locally. However, external authentication is possible as well. For example, see Using an LDAP Server for User Authentication.

24.2. Allocators

CloudStack enables administrators to write custom allocators that will choose the Host to place a new guest and the storage host from which to allocate guest virtual disk images.

24.3. User Data and Meta Data

CloudStack provides API access to attach user data to a deployed VM. Deployed VMs also have access to instance metadata via the virtual router.
User data can be accessed once the IP address of the virtual router is known. Once the IP address is known, use the following steps to access the user data:
  1. Run the following command to find the virtual router.
    # cat /var/lib/dhclient/dhclient-eth0.leases | grep dhcp-server-identifier | tail -1
  2. Access user data by running the following command using the result of the above command
    # curl http://10.1.1.1/latest/user-data
Meta Data can be accessed similarly, using a URL of the form http://10.1.1.1/latest/meta-data/{metadata type}. (For backwards compatibility, the previous URL http://10.1.1.1/latest/{metadata type} is also supported.) For metadata type, use one of the following:
  • service-offering. A description of the VMs service offering
  • availability-zone. The Zone name
  • local-ipv4. The guest IP of the VM
  • local-hostname. The hostname of the VM
  • public-ipv4. The first public IP for the router. (E.g. the first IP of eth2)
  • public-hostname. This is the same as public-ipv4
  • instance-id. The instance name of the VM
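For example, using the virtual router address from the command above, the guest IP of the VM can be retrieved with:
    # curl http://10.1.1.1/latest/meta-data/local-ipv4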

Chapter 25. Tuning

This section provides tips on how to improve the performance of your cloud.

25.1. Performance Monitoring

Host and guest performance monitoring is available to end users and administrators. This allows the user to monitor their utilization of resources and determine when it is appropriate to choose a more powerful service offering or larger disk.

25.2. Increase Management Server Maximum Memory

If the Management Server is subject to high demand, the default maximum JVM memory allocation can be insufficient. To increase the memory:
  1. Edit the Tomcat configuration file:
    /etc/cloud/management/tomcat6.conf
  2. Change the command-line parameter -XmxNNNm to a higher value of N.
    For example, if the current value is -Xmx128m, change it to -Xmx1024m or higher.
  3. To put the new setting into effect, restart the Management Server.
    # service cloud-management restart
For more information about memory issues, see "FAQ: Memory" at Tomcat Wiki.
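For illustration, the maximum memory is set on the JAVA_OPTS line in that file. An edited line might look like the following; the other flags present in the real file are omitted here:
    JAVA_OPTS="-Djava.awt.headless=true -Xmx1024m"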

25.3. Set Database Buffer Pool Size

It is important to provide enough memory space for the MySQL database to cache data and indexes:
  1. Edit the MySQL configuration file:
    /etc/my.cnf
  2. Insert the following line in the [mysqld] section, below the datadir line. Use a value that is appropriate for your situation. We recommend setting the buffer pool at 40% of RAM if MySQL is on the same server as the management server or 70% of RAM if MySQL has a dedicated server. The following example assumes a dedicated server with 1024M of RAM.
    innodb_buffer_pool_size=700M
  3. Restart the MySQL service.
    # service mysqld restart
For more information about the buffer pool, see "The InnoDB Buffer Pool" at MySQL Reference Manual.

25.4. Set and Monitor Total VM Limits per Host

The CloudStack administrator should monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are automatically redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use CloudStack global configuration settings to set this as the default limit. Monitor the VM activity in each cluster at all times. Keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation of more VMs to the cluster.
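For example, with 8 hosts in a cluster, a per-host limit of 50 VMs, and an allowance for one host being down at any given time, keep the cluster at or below (8-1) * 50 = 350 VM instances.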

25.5. Configure XenServer dom0 Memory

Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see the Citrix Knowledgebase Article. The article refers to XenServer 5.6, but the same information applies to XenServer 6.

Chapter 26. Troubleshooting

26.1. Events

An event is essentially a significant or meaningful change in the state of the virtual and physical resources associated with a cloud environment. Events are used by monitoring systems, usage and billing systems, or any other event-driven workflow systems to discern a pattern and make the right business decision. In CloudStack an event could be a state change of virtual or physical resources, an action performed by a user (action events), or a policy-based event (alerts).

26.1.1. Event Logs

There are two types of events logged in the CloudStack Event Log. Standard events log the success or failure of an event and can be used to identify jobs or processes that have failed. There are also long running job events. Events for asynchronous jobs log when a job is scheduled, when it starts, and when it completes. Other long running synchronous jobs log when a job starts, and when it completes. Long running synchronous and asynchronous event logs can be used to gain more information on the status of a pending job or can be used to identify a job that is hanging or has not started. The following sections provide more information on these events.

26.1.2. Event Notification

The event notification framework provides a means for the Management Server components to publish and subscribe to CloudStack events. Event notification is achieved by introducing an event bus abstraction in the Management Server. The event bus allows CloudStack components and extension plug-ins to subscribe to events by using an Advanced Message Queuing Protocol (AMQP) client. In CloudStack, a default implementation of the event bus is provided as a plug-in that uses the RabbitMQ AMQP client. The AMQP client pushes the published events to a compatible AMQP server. Therefore, all the CloudStack events are published to an exchange in the AMQP server.
A new event for state change, resource state change, is introduced as part of Event notification framework. Every resource, such as user VM, volume, NIC, network, public IP, snapshot, and template, is associated with a state machine and generates events as part of the state change. That implies that a change in the state of a resource results in a state change event, and the event is published in the corresponding state machine on the event bus. All the CloudStack events (alerts, action events, usage events) and the additional category of resource state change events, are published on to the events bus.
Use Cases
The following are some of the use cases:
  • Usage or Billing Engines: A third-party cloud usage solution can implement a plug-in that connects to CloudStack, subscribes to CloudStack events, and generates usage data. The usage data is consumed by their usage software.
  • An AMQP plug-in can place all the events on a message queue, and an AMQP message broker can then provide topic-based notification to the subscribers.
  • A publish-and-subscribe notification service can be implemented as a pluggable service in CloudStack that provides a rich set of APIs for event notification, such as topic-based subscription and notification. Additionally, the pluggable service can deal with multi-tenancy, authentication, and authorization issues.
Configuration
As a CloudStack administrator, perform the following one-time configuration to enable the event notification framework. The behavior cannot be changed at run time.
  1. Open componentContext.xml.
  2. Define a bean named eventNotificationBus as follows:
    • name : Specify a name for the bean.
    • server : The name or the IP address of the RabbitMQ AMQP server.
    • port : The port on which RabbitMQ server is running.
    • username : The username associated with the account to access the RabbitMQ server.
    • password : The password associated with the username of the account to access the RabbitMQ server.
    • exchange : The exchange name on the RabbitMQ server where CloudStack events are published.
      A sample bean is given below:
      <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
          <property name="name" value="eventNotificationBus"/>
          <property name="server" value="127.0.0.1"/>
          <property name="port" value="5672"/>
          <property name="username" value="guest"/>
          <property name="password" value="guest"/>
          <property name="exchange" value="cloudstack-events"/>
      </bean>
      The eventNotificationBus bean represents the org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus class.
  3. Restart the Management Server.
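Once the Management Server is publishing to the exchange, any AMQP client can subscribe to the events. The following is a minimal subscriber sketch in Python, assuming the third-party pika client library, the connection details from the sample bean above, and a topic exchange; the routing-key scheme of the published events is not documented here, so the sketch binds with the wildcard "#" to receive everything.
    import pika

    # Connect with the same address and credentials as the eventNotificationBus bean.
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='127.0.0.1', port=5672,
        credentials=pika.PlainCredentials('guest', 'guest')))
    channel = connection.channel()

    # Declare a private queue for this consumer and bind it to the exchange
    # that CloudStack publishes to (assumed to already exist as a topic exchange).
    result = channel.queue_declare(queue='', exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(exchange='cloudstack-events', queue=queue_name, routing_key='#')

    def on_event(ch, method, properties, body):
        # Print the routing key and the raw event payload.
        print(method.routing_key, body)

    channel.basic_consume(queue=queue_name, on_message_callback=on_event, auto_ack=True)
    channel.start_consuming()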

26.1.3. Standard Events

The events log records three types of standard events.
  • INFO. This event is generated when an operation has been successfully performed.
  • WARN. This event is generated in the following circumstances.
    • When a network is disconnected while monitoring a template download.
    • When a template download is abandoned.
    • When an issue on the storage server causes the volumes to fail over to the mirror storage server.
  • ERROR. This event is generated when an operation has not been successfully performed.

26.1.4. Long Running Job Events

The events log records three types of standard events.
  • INFO. This event is generated when an operation has been successfully performed.
  • WARN. This event is generated in the following circumstances.
    • When a network is disconnected while monitoring a template download.
    • When a template download is abandoned.
    • When an issue on the storage server causes the volumes to fail over to the mirror storage server.
  • ERROR. This event is generated when an operation has not been successfully performed.

26.1.5. Event Log Queries

Database logs can be queried from the user interface. The list of events captured by the system includes:
  • Virtual machine creation, deletion, and on-going management operations
  • Virtual router creation, deletion, and on-going management operations
  • Template creation and deletion
  • Network/load balancer rules creation and deletion
  • Storage volume creation and deletion
  • User login and logout

26.2. Working with Server Logs

The CloudStack Management Server logs all web site, middle tier, and database activities for diagnostic purposes in /var/log/cloudstack/management/. CloudStack logs a variety of error messages. We recommend the following command to find problematic output in the Management Server log:

Note

When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.
        grep -i -E 'exception|unable|fail|invalid|leak|warn|error' /var/log/cloudstack/management/management-server.log
CloudStack processes requests with a job ID. If you find an error in the logs and you are interested in debugging the issue, you can grep for this job ID in the Management Server log. For example, suppose that you find the following ERROR message:
        2010-10-04 13:49:32,595 ERROR [cloud.vm.UserVmManagerImpl] (Job-Executor-11:job-1076) Unable to find any host for [User|i-8-42-VM-untagged]
Note that the job ID is 1076. You can track back the events relating to job 1076 with the following grep:
        grep "job-1076)" management-server.log
The CloudStack Agent Server logs its activities in /var/log/cloudstack/agent/.

26.3. Data Loss on Exported Primary Storage

Symptom
Loss of existing data on primary storage which has been exposed as a Linux NFS server export on an iSCSI volume.
Cause
It is possible that a client from outside the intended pool has mounted the storage. When this occurs, the LVM is wiped and all data in the volume is lost.
Solution
When setting up LUN exports, restrict the range of IP addresses that are allowed access by specifying a subnet mask. For example:
echo "/export 192.168.1.0/24(rw,async,no_root_squash)" > /etc/exports
Adjust the above command to suit your deployment needs.
More Information
See the export procedure in the "Secondary Storage" section of the CloudStack Installation Guide

26.4. Recovering a Lost Virtual Router

Symptom
A virtual router is running, but the host is disconnected. A virtual router no longer functions as expected.
Cause
The Virtual router is lost or down.
Solution
If you are sure that a virtual router is down forever, or no longer functions as expected, destroy it and create a fresh one while keeping the backup router up and running (this assumes a redundant router setup):
  • Force stop the router. Use the stopRouter API with forced=true parameter to do so.
  • Before you continue with destroying this router, ensure that the backup router is running. Otherwise the network connection will be lost.
  • Destroy the router by using the destroyRouter API.
Recreate the missing router by using the restartNetwork API with cleanup=false parameter. For more information about redundant router setup, see Creating a New Network Offering.
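As a sketch, this sequence as raw API calls looks like the following; the IDs are placeholders, and each request must also be signed as usual:
    command=stopRouter&id=<router_id>&forced=true
    command=destroyRouter&id=<router_id>
    command=restartNetwork&id=<network_id>&cleanup=false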
For more information about the API syntax, see the API Reference at http://docs.cloudstack.org/CloudStack_Documentation/API_Reference%3A_CloudStack.

26.5. Maintenance mode not working on vCenter

Symptom
Host was placed in maintenance mode, but still appears live in vCenter.
Cause
The CloudStack administrator UI was used to place the host in scheduled maintenance mode. This mode is separate from vCenter's maintenance mode.
Solution
Use vCenter to place the host in maintenance mode.

26.6. Unable to deploy VMs from uploaded vSphere template

Symptom
When attempting to create a VM, the VM will not deploy.
Cause
If the template was created by uploading an OVA file that was created using vSphere Client, it is possible the OVA contained an ISO image. If it does, the deployment of VMs from the template will fail.
Solution
Remove the ISO and re-upload the template.

26.7. Unable to power on virtual machine on VMware

Symptom
Virtual machine does not power on. You might see errors like:
  • Unable to open Swap File
  • Unable to access a file since it is locked
  • Unable to access Virtual machine configuration
Cause
A known issue on VMware machines. ESX hosts lock certain critical virtual machine files and file systems to prevent concurrent changes. Sometimes the files are not unlocked when the virtual machine is powered off. When a virtual machine attempts to power on, it cannot access these critical files, and the virtual machine is unable to power on.
Solution
See the following:

26.8. Load balancer rules fail after changing network offering

Symptom
After changing the network offering on a network, load balancer rules stop working.
Cause
Load balancing rules were created while using a network service offering that includes an external load balancer device such as NetScaler, and later the network service offering changed to one that uses the CloudStack virtual router.
Solution
Create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.

Chapter 27. Introduction to the CloudStack API

27.1. Roles

The CloudStack API supports three access roles:
  1. Root Admin. Access to all features of the cloud, including both virtual and physical resource management.
  2. Domain Admin. Access to only the virtual resources of the clouds that belong to the administrator’s domain.
  3. User. Access to only the features that allow management of the user’s virtual instances, storage, and network.

27.2. API Reference Documentation

You can find all the API reference documentation at the following site:

27.3. Getting Started

To get started using the CloudStack API, you should have the following:
  • URL of the CloudStack server you wish to integrate with.
  • Both the API Key and Secret Key for an account. This should have been generated by the administrator of the cloud instance and given to you.
  • Familiarity with HTTP GET/POST and query strings.
  • Knowledge of either XML or JSON.
  • Knowledge of a programming language that can generate HTTP requests; for example, Java or PHP.

Chapter 28. What's New in the API?

The following describes any new major features of each CloudStack version as it applies to API usage.

28.1. What's New in the API for 4.1

28.1.1. Reconfiguring Physical Networks in VMs

CloudStack provides the ability to move VMs between networks and reconfigure a VM's network. You can remove a VM from a physical network and add it to another physical network. You can also change the default physical network of a virtual machine. With this functionality, hybrid or traditional server loads can be accommodated with ease.
This feature is supported on XenServer and KVM hypervisors.
The following APIs have been added to support this feature. These API calls can function only while the VM is in the Running or Stopped state.

28.1.1.1. addNicToVirtualMachine

The addNicToVirtualMachine API adds a new NIC to the specified VM on a selected network.
The API takes the following parameters:
  • virtualmachineid (required): The unique ID of the VM to which the NIC is to be added.
  • networkid (required): The unique ID of the network to which the new NIC should be attached.
  • ipaddress (optional): The IP address of the VM on the network.
The network and VM must reside in the same zone. Two VMs with the same name cannot reside in the same network. Therefore, adding a second VM that duplicates a name on a network will fail.
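As a sketch, an addNicToVirtualMachine request built from these parameters looks like the following; the IDs are placeholders, and the request must also be signed as usual:
    command=addNicToVirtualMachine
    &virtualmachineid=<vm_id>
    &networkid=<network_id>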

28.1.1.2. removeNicFromVirtualMachine

The removeNicFromVirtualMachine API removes a NIC from the specified VM on a selected network.
The API takes the following parameters:
  • virtualmachineid (required): The unique ID of the VM from which the NIC is to be removed.
  • nicid (required): The unique ID of the NIC that you want to remove.
Removing the default NIC is not allowed.

28.1.1.3. updateDefaultNicForVirtualMachine

The updateDefaultNicForVirtualMachine API updates the specified NIC to be the default one for a selected VM.
The API takes the following parameters:
  • virtualmachineid (required): The unique ID of the VM for which you want to specify the default NIC.
  • nicid (required): The unique ID of the NIC that you want to set as the default.

28.1.2. IPv6 Support in CloudStack

CloudStack supports Internet Protocol version 6 (IPv6), the newer version of the Internet Protocol (IP) that defines how network traffic is routed. IPv6 uses a 128-bit address, which exponentially expands the address space available to users. IPv6 addresses consist of eight groups of four hexadecimal digits separated by colons, for example, 2001:0db8:83a3:1012:1000:8a2e:0870:7454. CloudStack supports IPv6 for public IPs in shared networks. With IPv6 support, VMs in shared networks can obtain both IPv4 and IPv6 addresses from the DHCP server. You can deploy VMs either in an IPv6 or IPv4 network, or in a dual network environment. If an IPv6 network is used, the VM generates a link-local IPv6 address by itself, and receives a stateful IPv6 address from the DHCPv6 server.
IPv6 is supported only on KVM and XenServer hypervisors. IPv6 support is an experimental feature.
Here's the sequence of events when IPv6 is used:
  1. The administrator creates an IPv6 shared network in an advanced zone.
  2. The user deploys a VM in an IPv6 shared network.
  3. The user VM generates an IPv6 link local address by itself, and gets an IPv6 global or site local address through DHCPv6.
    For information on API changes, see Section 28.1.5, “Changed API Commands in 4.1”.

28.1.2.1. Prerequisites and Guidelines

Consider the following:
  • CIDR size must be 64 for IPv6 networks.
  • The DHCP client of the guest VMs should support generating DUID based on Link-layer Address (DUID-LL). DUID-LL derives from the MAC address of guest VMs, and therefore the user VM can be identified by using DUID. See Dynamic Host Configuration Protocol for IPv6 for more information.
  • The gateway of the guest network generates Router Advertisement and Response messages to Router Solicitation. The M (Managed Address Configuration) flag of the Router Advertisement should enable stateful IP address configuration. Set the M flag so that the end nodes receive their IPv6 addresses from the DHCPv6 server as opposed to the router or switch.

    Note

    The M flag is the 1-bit Managed Address Configuration flag for Router Advertisement. When set, Dynamic Host Configuration Protocol (DHCPv6) is available for address configuration in addition to any IPs set by using stateless address auto-configuration.
  • Use the System VM template exclusively designed to support IPv6. Download the System VM template from http://cloudstack.apt-get.eu/systemvm/.
  • The concept of Default Network applies to IPv6 networks. However, unlike IPv4, CloudStack does not control the routing information of IPv6 in shared networks; the choice of Default Network will not affect the routing in the user VM.
  • In a multiple shared network, the default route is set by the rack router rather than the DHCP server, which is outside CloudStack's control. Therefore, in order for the user VM to get only the default route from the default NIC, modify the configuration of the user VM, and explicitly set accept_ra to 0 on the non-default NICs. The accept_ra parameter accepts Router Advertisements and auto-configures /proc/sys/net/ipv6/conf/interface with the received data.

28.1.2.2. Limitations of IPv6 in CloudStack

The following are not yet supported:
  1. Security groups
  2. Userdata and metadata
  3. Passwords

28.1.2.3. Guest VM Configuration for DHCPv6

For the guest VMs to get an IPv6 address, run the dhclient command manually on each of the VMs. Use DUID-LL to set up dhclient.

Note

The IPv6 address is lost when a VM is stopped and started. Therefore, use the same procedure to get an IPv6 address when a VM is stopped and started.
  1. Set up dhclient by using DUID-LL.
    Perform the following for DHCP Client 4.2 and above:
    1. Run the following command on the selected VM to get the dhcpv6 offer from VR:
      dhclient -6 -D LL <dev>
    Perform the following for DHCP Client 4.1:
    1. Open the dhclient configuration file:
      vi /etc/dhcp/dhclient.conf
    2. Add the following to the dhclient configuration file:
      send dhcp6.client-id = concat(00:03:00, hardware);
  2. Get IPv6 address from DHCP server as part of the system or network restart.
    Based on the operating systems, perform the following:
    On CentOS 6.2:
    1. Open the Ethernet interface configuration file:
      vi /etc/sysconfig/network-scripts/ifcfg-eth0
      The ifcfg-eth0 file controls the first NIC in a system.
    2. Make the necessary configuration changes, as given below:
      DEVICE=eth0
      HWADDR=06:A0:F0:00:00:38
      NM_CONTROLLED=no
      ONBOOT=yes
      BOOTPROTO=dhcp6
      TYPE=Ethernet
      USERCTL=no
      PEERDNS=yes
      IPV6INIT=yes
      DHCPV6C=yes
    3. Open the following:
      vi /etc/sysconfig/network
    4. Make the necessary configuration changes, as given below:
      NETWORKING=yes
      HOSTNAME=centos62mgmt.lab.vmops.com
      NETWORKING_IPV6=yes
      IPV6_AUTOCONF=no
    On Ubuntu 12.10
    1. Open the following:
      /etc/network/interfaces
    2. Make the necessary configuration changes, as given below:
      iface eth0 inet6 dhcp
      autoconf 0
      accept_ra 1

28.1.3. Additional VMX Settings

A VMX (.vmx) file is the primary configuration file for a virtual machine. When a new VM is created, information on the operating system, disk sizes, and networking is stored in this file. The VM actively writes to its .vmx file for all the configuration changes. The VMX file is typically located in the directory where the VM is created. In Windows Vista / Windows 7 / Windows Server 2008, the default location is C:\Users\<your_user_name>\My Documents\Virtual Machines\<virtual_machine_name>.vmx. In Linux, vmware-cmd -l lists the full path to all the registered VMX files. Any manual additions to the .vmx file from ESX/ESXi are overwritten by the entries stored in the vCenter Server database. Therefore, before you edit a .vmx file, first remove the VM from the vCenter server's inventory and register the VM again after editing.
The CloudStack API that supports passing some of the VMX settings is registerTemplate. The supported parameters are rootDiskController, nicAdapter, and keyboard. In addition to these existing VMX parameters, you can now use the keyboard.typematicMinDelay parameter in the registerTemplate API call. This parameter controls the amount of delay for the repeated key strokes on remote consoles. For more information on keyboard.typematicMinDelay, see keyboard.typematicMinDelay.

28.1.4. Resetting SSH Keys to Access VMs

Use the resetSSHKeyForVirtualMachine API to set or reset the SSH keypair assigned to a virtual machine. With the addition of this feature, a lost or compromised SSH keypair can be changed, and the user can access the VM by using the new keypair. Just create or register a new keypair, then call resetSSHKeyForVirtualMachine.
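As a sketch, assuming the id and keypair request parameters, such a call looks like the following; the values are placeholders, and the request must also be signed as usual:
    command=resetSSHKeyForVirtualMachine
    &id=<vm_id>
    &keypair=<new_keypair_name>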

28.1.5. Changed API Commands in 4.1

The following API commands have been changed in 4.1:
  • createNetworkOffering: The following request parameters have been added: isPersistent, startipv6, endipv6, ip6gateway, ip6cidr.
  • listNetworkOfferings, listNetworks: The following request parameters have been added: isPersistent (determines whether the network or network offering listed is persistent), ip6gateway, ip6cidr.
  • createVlanIpRange: The following request parameters have been added: startipv6, endipv6, ip6gateway, ip6cidr.
  • deployVirtualMachine: The following parameter has been added: ip6Address. The following parameter is updated to accept an IPv6 address: iptonetworklist.
  • CreateZoneCmd: The following parameters have been added: ip6dns1, ip6dns2.
  • listRouters, listVirtualMachines: For NIC responses, the following fields have been added: ip6address, ip6gateway, ip6cidr.
  • listVlanIpRanges: The following response fields have been added: startipv6, endipv6, ip6gateway, ip6cidr.
  • listRouters, listZones: For DomainRouter and DataCenter responses, the following fields have been added: ip6dns1, ip6dns2.
  • addF5LoadBalancer, configureF5LoadBalancer, listF5LoadBalancers, addNetscalerLoadBalancer, configureNetscalerLoadBalancer, listNetscalerLoadBalancers: The following response parameter is removed: inline.
  • listFirewallRules, createFirewallRule: The following request parameter is added: traffictype (optional).
  • listUsageRecords: The following response parameter is added: virtualsize.
  • deleteIso: The following request parameter is added: forced (optional).
  • createStoragePool: The following request parameters are made mandatory: podid, clusterid.
  • createAccount: The following new request parameters are added: accountid, userid.
  • createUser: The following new request parameter is added: userid.
  • createDomain: The following new request parameter is added: domainid.
  • listZones: The following request parameter is added: securitygroupenabled.

28.1.6. Added API Commands in 4.1-incubating

  • createEgressFirewallRules (creates an egress firewall rule on the guest network.)
  • deleteEgressFirewallRules (deletes an egress firewall rule on the guest network.)
  • listEgressFirewallRules (lists the egress firewall rules configured for a guest network.)
  • resetSSHKeyForVirtualMachine (Resets the SSHkey for virtual machine.)
  • addBaremetalHost (Adds a new host.)
  • addNicToVirtualMachine (Adds a new NIC to the specified VM on a selected network.)
  • removeNicFromVirtualMachine (Removes the specified NIC from a selected VM.)
  • updateDefaultNicForVirtualMachine (Updates the specified NIC to be the default one for a selected VM.)
  • addRegion (Registers a Region into another Region.)
  • updateRegion (Updates Region details: ID, Name, Endpoint, User API Key, and User Secret Key.)
  • removeRegion (Removes a Region from current Region.)
  • listRegions (List all the Regions. Filter them by using the ID or Name.)
  • getUser (This API can only be used by the Admin. Get user details by using the API Key.)

28.2. What's New in the API for 4.0

28.2.1. Changed API Commands in 4.0.0-incubating

The following API commands have been changed in 4.0.0-incubating:
  • copyTemplate, prepareTemplate, registerTemplate, updateTemplate, createProject, activateProject, suspendProject, updateProject, listProjectAccounts, createVolume, migrateVolume, attachVolume, detachVolume, uploadVolume, createSecurityGroup, registerIso, copyIso, updateIso, createIpForwardingRule, listIpForwardingRules, createLoadBalancerRule, updateLoadBalancerRule, createSnapshot: The commands in this list have a single new response parameter, and no other changes. New response parameter: tags(*). (Note: Many other commands also have the new tags(*) parameter in addition to other changes; those commands are listed separately.)
  • rebootVirtualMachine, attachIso, detachIso, listLoadBalancerRuleInstances, resetPasswordForVirtualMachine, changeServiceForVirtualMachine, recoverVirtualMachine, startVirtualMachine, migrateVirtualMachine, deployVirtualMachine, assignVirtualMachine, updateVirtualMachine, restoreVirtualMachine, stopVirtualMachine, destroyVirtualMachine: The commands in this list have two new response parameters, and no other changes. New response parameters: keypair, tags(*)
  • listSecurityGroups, listFirewallRules, listPortForwardingRules, listSnapshots, listIsos, listProjects, listTemplates, listLoadBalancerRules: New request parameter: tags (optional). New response parameter: tags(*)
  • listF5LoadBalancerNetworks, listNetscalerLoadBalancerNetworks, listSrxFirewallNetworks, updateNetwork: The commands in this list have three new response parameters, and no other changes. New response parameters: canusefordeploy, vpcid, tags(*)
  • createZone, updateZone: New request parameter: localstorageenabled (optional). New response parameter: localstorageenabled
  • listZones: New response parameter: localstorageenabled
  • rebootRouter, changeServiceForRouter, startRouter, destroyRouter, stopRouter: The commands in this list have two new response parameters, and no other changes. New response parameters: vpcid, nic(*)
  • updateAccount, disableAccount, listAccounts, markDefaultZoneForAccount, enableAccount: The commands in this list have three new response parameters, and no other changes. New response parameters: vpcavailable, vpclimit, vpctotal
  • listRouters: New request parameters: forvpc (optional), vpcid (optional). New response parameters: vpcid, nic(*)
  • listNetworkOfferings: New request parameter: forvpc (optional). New response parameter: forvpc
  • listVolumes: New request parameters: details (optional), tags (optional). New response parameter: tags(*)
  • addTrafficMonitor: New request parameters: excludezones (optional), includezones (optional)
  • createNetwork: New request parameter: vpcid (optional). New response parameters: canusefordeploy, vpcid, tags(*)
  • listPublicIpAddresses: New request parameters: tags (optional), vpcid (optional). New response parameters: vpcid, tags(*)
  • listNetworks: New request parameters: canusefordeploy (optional), forvpc (optional), tags (optional), vpcid (optional). New response parameters: canusefordeploy, vpcid, tags(*)
  • restartNetwork: New response parameters: vpcid, tags(*)
  • enableStaticNat: New request parameter: networkid (optional)
  • createDiskOffering: New request parameter: storagetype (optional). New response parameter: storagetype
  • listDiskOfferings: New response parameter: storagetype
  • updateDiskOffering: New response parameter: storagetype
  • createFirewallRule: Changed request parameter: ipaddressid (old version - optional, new version - required). New response parameter: tags(*)
  • listVirtualMachines: New request parameters: isoid (optional), tags (optional), templateid (optional). New response parameters: keypair, tags(*)
  • updateStorageNetworkIpRange: New response parameters: id, endip, gateway, netmask, networkid, podid, startip, vlan, zoneid

28.2.2. Added API Commands in 4.0.0-incubating

  • createCounter (Adds metric counter)
  • deleteCounter (Deletes a counter)
  • listCounters (List the counters)
  • createCondition (Creates a condition)
  • deleteCondition (Removes a condition)
  • listConditions (List Conditions for the specific user)
  • createTags. Add tags to one or more resources. Example:
    command=createTags
    &resourceIds=1,10,12
    &resourceType=userVm
    &tags[0].key=region
    &tags[0].value=canada
    &tags[1].key=city
    &tags[1].value=Toronto
  • deleteTags. Remove tags from one or more resources. Example:
    command=deleteTags
    &resourceIds=1,12
    &resourceType=Snapshot
    &tags[0].key=city
  • listTags (Show currently defined resource tags)
  • createVPC (Creates a VPC)
  • listVPCs (Lists VPCs)
  • deleteVPC (Deletes a VPC)
  • updateVPC (Updates a VPC)
  • restartVPC (Restarts a VPC)
  • createVPCOffering (Creates VPC offering)
  • updateVPCOffering (Updates VPC offering)
  • deleteVPCOffering (Deletes VPC offering)
  • listVPCOfferings (Lists VPC offerings)
  • createPrivateGateway (Creates a private gateway)
  • listPrivateGateways (List private gateways)
  • deletePrivateGateway (Deletes a Private gateway)
  • createNetworkACL (Creates an ACL rule for the given network; the network has to belong to a VPC)
  • deleteNetworkACL (Deletes a Network ACL)
  • listNetworkACLs (Lists all network ACLs)
  • createStaticRoute (Creates a static route)
  • deleteStaticRoute (Deletes a static route)
  • listStaticRoutes (Lists all static routes)
  • createVpnCustomerGateway (Creates site to site vpn customer gateway)
  • createVpnGateway (Creates site to site vpn local gateway)
  • createVpnConnection (Create site to site vpn connection)
  • deleteVpnCustomerGateway (Delete site to site vpn customer gateway)
  • deleteVpnGateway (Delete site to site vpn gateway)
  • deleteVpnConnection (Delete site to site vpn connection)
  • updateVpnCustomerGateway (Update site to site vpn customer gateway)
  • resetVpnConnection (Reset site to site vpn connection)
  • listVpnCustomerGateways (Lists site to site vpn customer gateways)
  • listVpnGateways (Lists site 2 site vpn gateways)
  • listVpnConnections (Lists site to site vpn connection gateways)
  • enableCiscoNexusVSM (Enables Nexus 1000v dvSwitch in CloudStack.)
  • disableCiscoNexusVSM (Disables Nexus 1000v dvSwitch in CloudStack.)
  • deleteCiscoNexusVSM (Deletes Nexus 1000v dvSwitch in CloudStack.)
  • listCiscoNexusVSMs (Lists the control VLAN ID, packet VLAN ID, and data VLAN ID, as well as the IP address of the Nexus 1000v dvSwitch.)

28.3. What's New in the API for 3.0

28.3.1. Enabling Port 8096

Port 8096, which allows API calls without authentication, is closed and disabled by default on any fresh 3.0.1 installations. You can enable 8096 (or another port) for this purpose as follows:
  1. Ensure that the first Management Server is installed and running.
  2. Set the global configuration parameter integration.api.port to the desired port.
  3. Restart the Management Server.
  4. On the Management Server host machine, create an iptables rule allowing access to that port.

28.3.2. Stopped VM

CloudStack now supports creating a VM without starting it. You can determine whether the VM needs to be started as part of the VM deployment. A VM can now be deployed in two ways: create and start a VM (the default method); or create a VM and leave it in the stopped state.
A new request parameter, startVM, is introduced in the deployVm API to support the stopped VM feature.
The possible values are:
  • true - The VM starts as a part of the VM deployment.
  • false - The VM is left in the stopped state at the end of the VM deployment.
The default value is true.
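As a sketch, a request that creates a VM but leaves it stopped looks like the following; the offering, template, and zone IDs are placeholders, and apiKey and signature are omitted:
    command=deployVirtualMachine
    &serviceOfferingId=<offering_id>
    &diskOfferingId=<disk_offering_id>
    &templateId=<template_id>
    &zoneId=<zone_id>
    &startvm=false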

28.3.3. Change to Behavior of List Commands

There was a major change in how our List* API commands work in CloudStack 3.0 compared to 2.2.x. The rules below apply only for managed resources – those that belong to an account, domain, or project. They are irrelevant for the List* commands displaying unmanaged (system) resources, such as hosts, clusters, and external network resources.
When no parameters are passed in to the call, the caller sees only resources owned by the caller (even when the caller is the administrator). Previously, the administrator saw everyone else's resources by default.
When accountName and domainId are passed in:
  • The caller sees the resources dedicated to the account specified.
  • If the call is executed by a regular user, the user is authorized to specify only the user's own account and domainId.
  • If the caller is a domain administrator, CloudStack performs an authorization check to see whether the caller is permitted to view resources for the given account and domainId.
When projectId is passed in, only resources belonging to that project are listed.
When domainId is passed in, the call returns only resources belonging to the domain specified. To see the resources of subdomains, use the parameter isRecursive=true. Again, the regular user can see only resources owned by that user, the root administrator can list anything, and a domain administrator is authorized to see only resources of the administrator's own domain and subdomains.
To see all resources the caller is authorized to see, except for Project resources, use the parameter listAll=true.
To see all Project resources the caller is authorized to see, use the parameter projectId=-1.
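For example, with listVirtualMachines (apiKey and signature omitted in each case):
    command=listVirtualMachines (only the caller's own VMs)
    command=listVirtualMachines&listAll=true (everything the caller is authorized to see, except project resources)
    command=listVirtualMachines&domainid=<domain_id>&isRecursive=true (a domain and its subdomains, subject to the rules above)
    command=listVirtualMachines&projectid=-1 (all project resources the caller is authorized to see)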
There is one API command that doesn't fall under the rules above completely: the listTemplates command. This command has its own flags defining the list rules:
  • featured: Returns templates that have been marked as featured and public.
  • self: Returns templates that have been registered or created by the calling user.
  • selfexecutable: Same as self, but only returns templates that are ready to be deployed.
  • sharedexecutable: Ready templates that have been granted to the calling user by another user.
  • executable: Templates that are owned by the calling user, or public templates, that can be used to deploy a new VM.
  • community: Returns templates that have been marked as public but not featured.
  • all: Returns all templates (only usable by admins).
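These flags are passed in the templatefilter request parameter of listTemplates. For example, a request for featured templates looks like the following (apiKey and signature omitted):
    command=listTemplates&templatefilter=featured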
The CloudStack UI on a general view will display all resources that the logged-in user is authorized to see, except for project resources. To see the project resources, select the project view.

28.3.4. Removed API commands

  • createConfiguration (Adds configuration value)
  • configureSimulator (Configures simulator)

28.3.5. Added API commands in 3.0

28.3.5.1. Added in 3.0.2

  • changeServiceForSystemVm
    Changes the service offering for a system VM (console proxy or secondary storage). The system VM must be in a "Stopped" state for this command to take effect.

28.3.5.2. Added in 3.0.1

  • changeServiceForSystemVm
    Changes the service offering for a system VM (console proxy or secondary storage). The system VM must be in a "Stopped" state for this command to take effect.

28.3.5.3. Added in 3.0.0

assignVirtualMachine (Move a user VM to another user under same domain.)
restoreVirtualMachine (Restore a VM to original template or specific snapshot)
createLBStickinessPolicy (Creates a Load Balancer stickiness policy )
deleteLBStickinessPolicy (Deletes a LB stickiness policy.)
listLBStickinessPolicies (Lists LBStickiness policies.)
ldapConfig (Configure the LDAP context for this site.)
addSwift (Adds Swift.)
listSwifts (List Swift.)
migrateVolume (Migrate volume)
updateStoragePool (Updates a storage pool.)
authorizeSecurityGroupEgress (Authorizes a particular egress rule for this security group)
revokeSecurityGroupEgress (Deletes a particular egress rule from this security group)
createNetworkOffering (Creates a network offering.)
deleteNetworkOffering (Deletes a network offering.)
createProject (Creates a project)
deleteProject (Deletes a project)
updateProject (Updates a project)
activateProject (Activates a project)
suspendProject (Suspends a project)
listProjects (Lists projects and provides detailed information for listed projects)
addAccountToProject (Adds account to a project)
deleteAccountFromProject (Deletes account from the project)
listProjectAccounts (Lists project's accounts)
listProjectInvitations (Lists an account's invitations to join projects)
updateProjectInvitation (Accepts or declines project invitation)
deleteProjectInvitation (Deletes a project invitation)
updateHypervisorCapabilities (Updates a hypervisor capabilities.)
listHypervisorCapabilities (Lists all hypervisor capabilities.)
createPhysicalNetwork (Creates a physical network)
deletePhysicalNetwork (Deletes a Physical Network.)
listPhysicalNetworks (Lists physical networks)
updatePhysicalNetwork (Updates a physical network)
listSupportedNetworkServices (Lists all network services provided by CloudStack or for the given Provider.)
addNetworkServiceProvider (Adds a network serviceProvider to a physical network)
deleteNetworkServiceProvider (Deletes a Network Service Provider.)
listNetworkServiceProviders (Lists network serviceproviders for a given physical network.)
updateNetworkServiceProvider (Updates a network serviceProvider of a physical network)
addTrafficType (Adds traffic type to a physical network)
deleteTrafficType (Deletes traffic type of a physical network)
listTrafficTypes (Lists traffic types of a given physical network.)
updateTrafficType (Updates traffic type of a physical network)
listTrafficTypeImplementors (Lists implementors of a network traffic type, or implementors of all network traffic types)
createStorageNetworkIpRange (Creates a Storage network IP range.)
deleteStorageNetworkIpRange (Deletes a storage network IP Range.)
listStorageNetworkIpRange (List a storage network IP range.)
updateStorageNetworkIpRange (Update a Storage network IP range, only allowed when no IPs in this range have been allocated.)
listUsageTypes (List Usage Types)
addF5LoadBalancer (Adds a F5 BigIP load balancer device)
configureF5LoadBalancer (configures a F5 load balancer device)
deleteF5LoadBalancer ( delete a F5 load balancer device)
listF5LoadBalancers (lists F5 load balancer devices)
listF5LoadBalancerNetworks (lists network that are using a F5 load balancer device)
addSrxFirewall (Adds a SRX firewall device)
deleteSrxFirewall ( delete a SRX firewall device)
listSrxFirewalls (lists SRX firewall devices in a physical network)
listSrxFirewallNetworks (lists network that are using SRX firewall device)
addNetscalerLoadBalancer (Adds a netscaler load balancer device)
deleteNetscalerLoadBalancer ( delete a netscaler load balancer device)
configureNetscalerLoadBalancer (configures a netscaler load balancer device)
listNetscalerLoadBalancers (lists netscaler load balancer devices)
listNetscalerLoadBalancerNetworks (lists network that are using a netscaler load balancer device)
createVirtualRouterElement (Create a virtual router element.)
configureVirtualRouterElement (Configures a virtual router element.)
listVirtualRouterElements (Lists all available virtual router elements.)

28.3.6. Added CloudStack Error Codes

You can now find the CloudStack-specific error code in the exception response for each type of exception. The following list of error codes is added to the new class named CSExceptionErrorCode.
4250 : "com.cloud.utils.exception.CloudRuntimeException"
4255 : "com.cloud.utils.exception.ExceptionUtil"
4260 : "com.cloud.utils.exception.ExecutionException"
4265 : "com.cloud.utils.exception.HypervisorVersionChangedException"
4270 : "com.cloud.utils.exception.RuntimeCloudException"
4275 : "com.cloud.exception.CloudException"
4280 : "com.cloud.exception.AccountLimitException"
4285 : "com.cloud.exception.AgentUnavailableException"
4290 : "com.cloud.exception.CloudAuthenticationException"
4295 : "com.cloud.exception.CloudExecutionException"
4300 : "com.cloud.exception.ConcurrentOperationException"
4305 : "com.cloud.exception.ConflictingNetworkSettingsException"
4310 : "com.cloud.exception.DiscoveredWithErrorException"
4315 : "com.cloud.exception.HAStateException"
4320 : "com.cloud.exception.InsufficientAddressCapacityException"
4325 : "com.cloud.exception.InsufficientCapacityException"
4330 : "com.cloud.exception.InsufficientNetworkCapacityException"
4335 : "com.cloud.exception.InsufficientServerCapacityException"
4340 : "com.cloud.exception.InsufficientStorageCapacityException"
4345 : "com.cloud.exception.InternalErrorException"
4350 : "com.cloud.exception.InvalidParameterValueException"
4355 : "com.cloud.exception.ManagementServerException"
4360 : "com.cloud.exception.NetworkRuleConflictException"
4365 : "com.cloud.exception.PermissionDeniedException"
4370 : "com.cloud.exception.ResourceAllocationException"
4375 : "com.cloud.exception.ResourceInUseException"
4380 : "com.cloud.exception.ResourceUnavailableException"
4385 : "com.cloud.exception.StorageUnavailableException"
4390 : "com.cloud.exception.UnsupportedServiceException"
4395 : "com.cloud.exception.VirtualMachineMigrationException"
4400 : "com.cloud.exception.AccountLimitException"
4405 : "com.cloud.exception.AgentUnavailableException"
4410 : "com.cloud.exception.CloudAuthenticationException"
4415 : "com.cloud.exception.CloudException"
4420 : "com.cloud.exception.CloudExecutionException"
4425 : "com.cloud.exception.ConcurrentOperationException"
4430 : "com.cloud.exception.ConflictingNetworkSettingsException"
4435 : "com.cloud.exception.ConnectionException"
4440 : "com.cloud.exception.DiscoveredWithErrorException"
4445 : "com.cloud.exception.DiscoveryException"
4450 : "com.cloud.exception.HAStateException"
4455 : "com.cloud.exception.InsufficientAddressCapacityException"
4460 : "com.cloud.exception.InsufficientCapacityException"
4465 : "com.cloud.exception.InsufficientNetworkCapacityException"
4470 : "com.cloud.exception.InsufficientServerCapacityException"
4475 : "com.cloud.exception.InsufficientStorageCapacityException"
4480 : "com.cloud.exception.InsufficientVirtualNetworkCapcityException"
4485 : "com.cloud.exception.InternalErrorException"
4490 : "com.cloud.exception.InvalidParameterValueException"
4495 : "com.cloud.exception.ManagementServerException"
4500 : "com.cloud.exception.NetworkRuleConflictException"
4505 : "com.cloud.exception.PermissionDeniedException"
4510 : "com.cloud.exception.ResourceAllocationException"
4515 : "com.cloud.exception.ResourceInUseException"
4520 : "com.cloud.exception.ResourceUnavailableException"
4525 : "com.cloud.exception.StorageUnavailableException"
4530 : "com.cloud.exception.UnsupportedServiceException"
4535 : "com.cloud.exception.VirtualMachineMigrationException"
9999 : "org.apache.cloudstack.api.ServerApiException"

Chapter 29. Calling the CloudStack API

29.1. Making API Requests

All CloudStack API requests are submitted in the form of a HTTP GET/POST with an associated command and any parameters. A request is composed of the following whether in HTTP or HTTPS:
  • CloudStack API URL: This is the web services API entry point (for example, http://www.cloud.com:8080/client/api)
  • Command: The web services command you wish to execute, such as start a virtual machine or create a disk volume
  • Parameters: Any additional required or optional parameters for the command
A sample API GET request looks like the following:
http://localhost:8080/client/api?command=deployVirtualMachine&serviceOfferingId=1&diskOfferingId=1&templateId=2&zoneId=4&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
Or in a more readable format:
1. http://localhost:8080/client/api
2. ?command=deployVirtualMachine
3. &serviceOfferingId=1
4. &diskOfferingId=1
5. &templateId=2
6. &zoneId=4
7. &apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXqjB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ
8. &signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
The first line is the CloudStack API URL. This is the Cloud instance you wish to interact with.
The second line refers to the command you wish to execute. In our example, we are attempting to deploy a fresh new virtual machine. It is preceded by a (?) to separate itself from the CloudStack API URL.
Lines 3-6 are the parameters for this given command. To see the command and its request parameters, please refer to the appropriate section in the CloudStack API documentation. Each parameter field-value pair (field=value) is preceded by an ampersand character (&).
Line 7 is the user API Key that uniquely identifies the account. See Section 29.2, “Signing API Requests”.
Line 8 is the signature hash created to authenticate the user account executing the API command. See Section 29.2, “Signing API Requests”.

29.2. Signing API Requests

Whether you access the CloudStack API with HTTP or HTTPS, it must still be signed so that CloudStack can verify the caller has been authenticated and authorized to execute the command. Make sure that you have both the API Key and Secret Key provided by the CloudStack administrator for your account before proceeding with the signing process.
To show how to sign a request, we will re-use the previous example.
http://localhost:8080/client/api?command=deployVirtualMachine&serviceOfferingId=1&diskOfferingId=1&templateId=2&zoneId=4&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
Breaking this down, we have several distinct parts to this URL.
  • Base URL: This is the base URL to the CloudStack Management Server.
    http://localhost:8080
  • API Path: This is the path to the API Servlet that processes the incoming requests.
    /client/api?
  • Command String: This part of the query string comprises the command, its parameters, and the API Key that identifies the account.

    Note

    As with all query string parameters of field-value pairs, the "field" component is case insensitive while all "value" values are case sensitive.
    command=deployVirtualMachine&serviceOfferingId=1&diskOfferingId=1&templateId=2&zoneId=4&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ
  • Signature: This is the signature computed over the Command String, generated using a combination of the user’s Secret Key and the HMAC SHA-1 hashing algorithm.
    &signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
Every API request has the format Base URL+API Path+Command String+Signature.
To generate the signature.
  1. For each field-value pair (as separated by a '&') in the Command String, URL encode each value so that it can be safely sent via HTTP GET.

    Note

    Make sure all spaces are encoded as "%20" rather than "+".
  2. Lowercase the entire Command String and sort its field-value pairs alphabetically by field. The result of this step would look like the following.
    apikey=mivr6x7u6bn_sdahobpjnejpgest35exq-jb8cg20yi3yaxxcgpyuairmfi_ejtvwz0nukkjbpmy3y2bcikwfq&command=deployvirtualmachine&diskofferingid=1&serviceofferingid=1&templateid=2&zoneid=4
  3. Take the sorted Command String and run it through the HMAC SHA-1 hashing algorithm (most programming languages offer a utility method to do this) with the user’s Secret Key. Base64 encode the resulting byte array and then URL encode it so that it can be safely transmitted via HTTP. The final string produced after Base64 and URL encoding should be "Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D".
    By reconstructing the final URL in the format Base URL+API Path+Command String+Signature, the final URL should look like:
    http://localhost:8080/client/api?command=deployVirtualMachine&serviceOfferingId=1&diskOfferingId=1&templateId=2&zoneId=4&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D

29.2.1. How to sign an API call with Python

To illustrate the procedure used to sign API calls, we present a step-by-step interactive session using Python.
First import the required modules:

 
$python
Python 2.7.3 (default, Nov 17 2012, 19:54:34) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> import urllib
>>> import hashlib
>>> import hmac
>>> import base64
 
Define the endpoint of the Cloud, the command that you want to execute and the keys of the user.
 

>>> baseurl='http://localhost:8080/client/api?'
>>> request={}
>>> request['command']='listUsers'
>>> request['response']='json'
>>> request['apikey']='plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg'
>>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ'
  
Build the request string:
 
>>> request_str='&'.join(['='.join([k,urllib.quote_plus(request[k])]) for k in request.keys()])
>>> request_str
'apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json'
  
Compute the signature with hmac, then Base64 encode and URL encode the result:
  
>>> sig_str='&'.join(['='.join([k.lower(),urllib.quote_plus(request[k].lower().replace('+','%20'))])for k in sorted(request.iterkeys())]) 
>>> sig_str
'apikey=plgwjfzk4gys3momtvmjuvg-x-jlwlnfauj9gabbbf9edm-kaymmailqzzq1elzlyq_u38zcm0bewzgudp66mg&command=listusers&response=json'
>>> sig=hmac.new(secretkey,sig_str,hashlib.sha1)
>>> sig
<hmac.HMAC instance at 0x10d91d680>
>>> sig=hmac.new(secretkey,sig_str,hashlib.sha1).digest()
>>> sig
'M:]\x0e\xaf\xfb\x8f\xf2y\xf1p\x91\x1e\x89\x8a\xa1\x05\xc4A\xdb'
>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest())
>>> sig
'TTpdDq/7j/J58XCRHomKoQXEQds=\n'
>>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip()
>>> sig
'TTpdDq/7j/J58XCRHomKoQXEQds='
>>> sig=urllib.quote_plus(base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip())
  
Finally, build the entire string and do an http GET:
  
>>> req=baseurl+request_str+'&signature='+sig
>>> req
'http://localhost:8080/client/api?apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json&signature=TTpdDq%2F7j%2FJ58XCRHomKoQXEQds%3D'
>>> res=urllib2.urlopen(req)
>>> res.read()
'{ "listusersresponse" : { "count":3 ,"user" : [  {"id":"7ed6d5da-93b2-4545-a502-23d20b48ef2a","username":"admin","firstname":"admin","lastname":"cloud","created":"2012-07-05T12:18:27-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg","secretkey":"VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}, {"id":"1fea6418-5576-4989-a21e-4790787bbee3","username":"runseb","firstname":"foobar","lastname":"goa","email":"joe@smith.com","created":"2013-04-10T16:52:06-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"Xhsb3MewjJQaXXMszRcLvQI9_NPy_UcbDj1QXikkVbDC9MDSPwWdtZ1bUY1H7JBEYTtDDLY3yuchCeW778GkBA","secretkey":"gIsgmi8C5YwxMHjX5o51pSe0kqs6JnKriw0jJBLceY5bgnfzKjL4aM6ctJX-i1ddQIHJLbLJDK9MRzsKk6xZ_w","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}, {"id":"52f65396-183c-4473-883f-a37e7bb93967","username":"toto","firstname":"john","lastname":"smith","email":"john@smith.com","created":"2013-04-23T04:27:22-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"THaA6fFWS_OmvU8od201omxFC8yKNL_Hc5ZCS77LFCJsRzSx48JyZucbUul6XYbEg-ZyXMl_wuEpECzK-wKnow","secretkey":"O5ywpqJorAsEBKR_5jEvrtGHfWL1Y_j1E4Z_iCr8OKCYcsPIOdVcfzjJQ8YqK0a5EzSpoRrjOFiLsG0hQrYnDA","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"} ] } }'
  

29.3. Enabling API Call Expiration

You can set an expiry timestamp on API calls to prevent replay attacks over non-secure channels, such as HTTP. The server tracks the expiry timestamp you have specified and rejects any API request that comes in after this validity period.
To enable this feature, add the following parameters to the API request:
  • signatureVersion=3: If the signatureVersion parameter is missing or is not equal to 3, the expires parameter is ignored in the API request.
  • expires=YYYY-MM-DDThh:mm:ssZ: Specifies the date and time at which the signature included in the request expires. The timestamp is expressed in the YYYY-MM-DDThh:mm:ssZ format, as specified in the ISO 8601 standard.
For example:
expires=2011-10-10T12:00:00+0530
A sample API request with expiration is given below:
http://<IPAddress>:8080/client/api?command=listZones&signatureVersion=3&expires=2011-10-10T12:00:00+0530&apiKey=miVr6X7u6bN_sdahOBpjNejPgEsT35eXq-jB8CG20YI3yaxXcgpyuaIRmFI_EJTVwZ0nUkkJbPmY3y2bciKwFQ&signature=Lxx1DM40AjcXU%2FcaiK8RAP0O1hU%3D
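Continuing the Python session from Section 29.2.1, the two parameters are simply added to the request dictionary before the signature is computed (a minimal sketch; the timestamp is the same placeholder value used in the sample above):

>>> request['signatureVersion'] = '3'
>>> request['expires'] = '2011-10-10T12:00:00+0530'

The request is then signed and sent exactly as shown in Section 29.2.1.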

29.4. Limiting the Rate of API Requests

You can limit the rate at which API requests can be placed for each account. This is useful to avoid malicious attacks on the Management Server, prevent performance degradation, and provide fairness to all accounts.
If the number of API calls exceeds the threshold, an error message is returned for any additional API calls. The caller will have to retry these API calls at another time.

29.4.1. Configuring the API Request Rate

To control the API request rate, use the following global configuration settings:
  • api.throttling.enabled - Enable/Disable API throttling. By default, this setting is false, so API throttling is not enabled.
  • api.throttling.interval (in seconds) - Time interval during which the number of API requests is to be counted. When the interval has passed, the API count is reset to 0.
  • api.throttling.max - Maximum number of API requests that can be placed within the api.throttling.interval period.
  • api.throttling.cachesize - Cache size for storing API counters. Use a value higher than the total number of accounts managed by the cloud. One cache entry is needed for each account, to store the running API total for that account.
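These settings can be changed under Global Settings in the UI or with the updateConfiguration API call. For example, a request to turn throttling on might look like the following (unsigned and with a placeholder address; as with other global settings, the change may require a Management Server restart to take effect):

http://<IPAddress>:8080/client/api?command=updateConfiguration&name=api.throttling.enabled&value=true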

29.4.2. Limitations on API Throttling

The following limitations exist in the current implementation of this feature.

Note

Even with these limitations, CloudStack is still able to effectively use API throttling to avoid malicious attacks causing denial of service.
  • In a deployment with multiple Management Servers, the cache is not synchronized across them. In this case, CloudStack might not be able to ensure that only the exact desired number of API requests are allowed. In the worst case, the number of API calls that might be allowed is (number of Management Servers) * (api.throttling.max).
  • The API commands resetApiLimit and getApiLimit are limited to the Management Server where the API is invoked.

29.5. Responses

29.5.1. Response Formats: XML and JSON

CloudStack supports two formats as the response to an API call. The default response is XML. If you would like the response to be in JSON, add &response=json to the Command String.
The two response formats differ in how they handle blank fields. In JSON, if there is no value for a response field, it will not appear in the response. If all the fields were empty, there might be no response at all. In XML, even if there is no value to be returned, an empty field will be returned as a placeholder XML element.
Sample XML Response:
     <listipaddressesresponse> 
        <allocatedipaddress>
        <ipaddress>192.168.10.141</ipaddress> 
        <allocated>2009-09-18T13:16:10-0700</allocated> 
        <zoneid>4</zoneid> 
            <zonename>WC</zonename> 
            <issourcenat>true</issourcenat> 
        </allocatedipaddress>
     </listipaddressesresponse>
Sample JSON Response:
        { "listipaddressesresponse" : 
          { "allocatedipaddress" :
            [ 
              { 
                "ipaddress" : "192.168.10.141", 
                "allocated" : "2009-09-18T13:16:10-0700",
                "zoneid" : "4", 
                "zonename" : "WC", 
                "issourcenat" : "true" 
              } 
            ]
          } 
        } 
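Because the JSON format maps directly onto native data structures, it is usually the most convenient choice when scripting against the API. The snippet below is a minimal sketch (Python 2, reusing the hypothetical make_request helper from Section 29.2.1 with placeholder credentials); note the use of .get() because blank fields are simply omitted from JSON responses:

import json

raw = make_request('http://localhost:8080/client/api?', 'YOUR_API_KEY', 'YOUR_SECRET_KEY', 'listUsers')
response = json.loads(raw)
for user in response['listusersresponse'].get('user', []):
    print user['username'], user.get('email', '')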

29.5.2. Maximum Result Pages Returned

For each cloud, there is a default upper limit on the number of results that any API command will return in a single page. This is to help prevent overloading the cloud servers and prevent DOS attacks. For example, if the page size limit is 500 and a command returns 10,000 results, the command will return 20 pages.
The default page size limit can be different for each cloud. It is set in the global configuration parameter default.page.size. If your cloud has many users with lots of VMs, you might need to increase the value of this parameter. At the same time, be careful not to set it so high that your site can be taken down by an enormous return from an API call. For more information about how to set global configuration parameters, see "Describe Your Deployment" in the Installation Guide.
To decrease the page size limit for an individual API command, override the global setting with the page and pagesize parameters, which are available in any list* command (listCapabilities, listDiskOfferings, etc.).
  • Both parameters must be specified together.
  • The value of the pagesize parameter must be smaller than the value of default.page.size. That is, you cannot increase the number of possible items in a result page, only decrease it.
For syntax information on the list* commands, see the API Reference.
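For example, a script can walk through a long listing one page at a time. The loop below is a minimal sketch (Python 2, using the hypothetical make_request helper from Section 29.2.1 with placeholder endpoint and credentials, and assuming a pagesize smaller than default.page.size); it stops when a page comes back empty:

import json

baseurl = 'http://localhost:8080/client/api?'   # placeholders
apikey = 'YOUR_API_KEY'
secretkey = 'YOUR_SECRET_KEY'

page = 1
while True:
    raw = make_request(baseurl, apikey, secretkey, 'listVirtualMachines',
                       {'page': str(page), 'pagesize': '100'})
    resp = json.loads(raw).get('listvirtualmachinesresponse', {})
    vms = resp.get('virtualmachine', [])
    if not vms:
        break                 # an empty page means everything has been read
    for vm in vms:
        print vm['name']
    page += 1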

29.5.3. Error Handling

If an error occurs while processing an API request, the appropriate response in the format specified is returned. Each error response consists of an error code and an error text describing what went wrong.
An HTTP error code of 401 is always returned if the API request was rejected due to a bad signature, a missing API Key, or because the user did not have permission to execute the command.
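From a script, such a rejection surfaces as an HTTP error. The following is a minimal sketch of handling it (Python 2, using urllib2 and the hypothetical make_request helper from Section 29.2.1 with placeholder endpoint and credentials):

import urllib2

try:
    raw = make_request('http://localhost:8080/client/api?', 'YOUR_API_KEY', 'YOUR_SECRET_KEY', 'listUsers')
except urllib2.HTTPError as e:
    # 401: bad signature, missing API Key, or insufficient permission
    print 'API call failed with HTTP status', e.code
    # The body carries the error code and error text in the requested response format
    print e.read()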

29.6. Asynchronous Commands

Asynchronous commands were introduced in CloudStack 2.x. Commands are designated as asynchronous when they can potentially take a long period of time to complete, such as creating a snapshot or disk volume. They differ from synchronous commands by the following:
  • They are identified in the API Reference by an (A).
  • They immediately return a job ID that refers to the job responsible for processing the command.
  • If executed as a "create" resource command, it will return the resource ID as well as the job ID.
    You can periodically check the status of the job by making a simple API call to the queryAsyncJobResult command and passing in the job ID.

29.6.1. Job Status

The key to using an asynchronous command is the job ID that is returned immediately once the command has been executed. With the job ID, you can periodically check the job status by making calls to the queryAsyncJobResult command. The command will return three possible job status integer values:
  • 0 - Job is still in progress. Continue to periodically poll for any status changes.
  • 1 - Job has successfully completed. The job will return any successful response values associated with the command that was originally executed.
  • 2 - Job has failed to complete. Check the "jobresultcode" tag for the failure reason code and "jobresult" for the failure reason.
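Based on these status values, a client can simply poll until the job leaves the in-progress state. The loop below is a minimal sketch (Python 2, using the hypothetical make_request helper from Section 29.2.1 with placeholder endpoint and credentials; the polling interval is arbitrary):

import json
import time

def wait_for_job(jobid):
    while True:
        raw = make_request('http://localhost:8080/client/api?', 'YOUR_API_KEY', 'YOUR_SECRET_KEY',
                           'queryAsyncJobResult', {'jobid': jobid})
        result = json.loads(raw)['queryasyncjobresultresponse']
        if result['jobstatus'] != 0:   # 0 = still in progress
            return result              # 1 = succeeded, 2 = failed; inspect jobresult/jobresultcode
        time.sleep(5)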

29.6.2. Example

The following shows an example of using an asynchronous command. Assume the API command:
command=deployVirtualMachine&zoneId=1&serviceOfferingId=1&diskOfferingId=1&templateId=1
CloudStack will immediately return a job ID and any other additional data.
         <deployvirtualmachineresponse> 
              <jobid>1</jobid>
             <id>100</id>
         </deployvirtualmachineresponse>
Using the job ID, you can periodically poll for the results by using the queryAsyncJobResult command.
command=queryAsyncJobResult&jobId=1
Three possible results could come from this query.
Job is still pending:
         <queryasyncjobresult> 
              <jobid>1</jobid>
              <jobstatus>0</jobstatus>
              <jobprocstatus>1</jobprocstatus>
         </queryasyncjobresult>
Job has succeeded:
            <queryasyncjobresultresponse cloud-stack-version="3.0.1.6">
                  <jobid>1</jobid>
                  <jobstatus>1</jobstatus>
                  <jobprocstatus>0</jobprocstatus>
                 <jobresultcode>0</jobresultcode>
                  <jobresulttype>object</jobresulttype>
                  <jobresult>
                    <virtualmachine>
                    <id>450</id>
                    <name>i-2-450-VM</name>
                    <displayname>i-2-450-VM</displayname>
                    <account>admin</account>
                    <domainid>1</domainid>
                    <domain>ROOT</domain>
                    <created>2011-03-10T18:20:25-0800</created>
                    <state>Running</state>
                    <haenable>false</haenable>
                    <zoneid>1</zoneid>
                    <zonename>San Jose 1</zonename>
                    <hostid>2</hostid>
                    <hostname>905-13.sjc.lab.vmops.com</hostname>
                    <templateid>1</templateid>
                    <templatename>CentOS 5.3 64bit LAMP</templatename>
                    <templatedisplaytext>CentOS 5.3 64bit LAMP</templatedisplaytext>
                    <passwordenabled>false</passwordenabled>
                    <serviceofferingid>1</serviceofferingid>
                    <serviceofferingname>Small Instance</serviceofferingname>
                    <cpunumber>1</cpunumber>
                    <cpuspeed>500</cpuspeed>
                    <memory>512</memory>
                    <guestosid>12</guestosid>
                    <rootdeviceid>0</rootdeviceid>
                    <rootdevicetype>NetworkFilesystem</rootdevicetype>
                    <nic>
                      <id>561</id>
                      <networkid>205</networkid>
                      <netmask>255.255.255.0</netmask>
                      <gateway>10.1.1.1</gateway>
                      <ipaddress>10.1.1.225</ipaddress>
                      <isolationuri>vlan://295</isolationuri>
                      <broadcasturi>vlan://295</broadcasturi>
                      <traffictype>Guest</traffictype>
                      <type>Virtual</type>
                      <isdefault>true</isdefault>
                    </nic>
                    <hypervisor>XenServer</hypervisor>
                   </virtualmachine>
                 </jobresult>
            </queryasyncjobresultresponse>
Job has failed:
            <queryasyncjobresult>
                  <jobid>1</jobid> 
                  <jobstatus>2</jobstatus> 
                  <jobprocstatus>0</jobprocstatus>
                  <jobresultcode>551</jobresultcode>
                  <jobresulttype>text</jobresulttype>
                  <jobresult>Unable to deploy virtual machine id = 100 due to not enough capacity</jobresult> 
            </queryasyncjobresult>

Chapter 30. Working With Usage Data

The Usage Server provides aggregated usage records which you can use to create billing integration for the CloudStack platform. The Usage Server works by taking data from the events log and creating summary usage records that you can access using the listUsageRecords API call.
The usage records show the amount of resources, such as VM run time or template storage space, consumed by guest instances. In the special case of bare metal instances, no template storage resources are consumed, but records showing zero usage are still included in the Usage Server's output.
The Usage Server runs at least once per day. It can be configured to run multiple times per day. Its behavior is controlled by configuration settings as described in the CloudStack Administration Guide.
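For example, a request for one account's usage over a date range might look like the following (unsigned, with placeholder values; the startdate and enddate parameters are expressed as yyyy-MM-dd):

http://<IPAddress>:8080/client/api?command=listUsageRecords&startdate=2009-09-01&enddate=2009-09-30&account=user5&domainid=1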

30.1. Usage Record Format

30.1.1. Virtual Machine Usage Record Format

For running and allocated virtual machine usage, the following fields exist in a usage record:
  • account – name of the account
  • accountid – ID of the account
  • domainid – ID of the domain in which this account resides
  • zoneid – Zone where the usage occurred
  • description – A string describing what the usage record is tracking
  • usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for VM running time)
  • usagetype – A number representing the usage type (see Usage Types)
  • rawusage – A number representing the actual usage in hours
  • virtualMachineId – The ID of the virtual machine
  • name – The name of the virtual machine
  • offeringid – The ID of the service offering
  • templateid – The ID of the template or the ID of the parent template. The parent template value is present when the current template was created from a volume.
  • usageid – Virtual machine ID
  • type – Hypervisor type
  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

30.1.2. Network Usage Record Format

For network usage (bytes sent/received), the following fields exist in a usage record.
  • account – name of the account
  • accountid – ID of the account
  • domainid – ID of the domain in which this account resides
  • zoneid – Zone where the usage occurred
  • description – A string describing what the usage record is tracking
  • usagetype – A number representing the usage type (see Usage Types)
  • rawusage – A number representing the actual usage in hours
  • usageid – Device ID (virtual router ID or external device ID)
  • type – Device type (domain router, external load balancer, etc.)
  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

30.1.3. IP Address Usage Record Format

For IP address usage the following fields exist in a usage record.
  • account - name of the account
  • accountid - ID of the account
  • domainid - ID of the domain in which this account resides
  • zoneid - Zone where the usage occurred
  • description - A string describing what the usage record is tracking
  • usage - String representation of the usage, including the units of usage
  • usagetype - A number representing the usage type (see Usage Types)
  • rawusage - A number representing the actual usage in hours
  • usageid - IP address ID
  • startdate, enddate - The range of time for which the usage is aggregated; see Dates in the Usage Record
  • issourcenat - Whether source NAT is enabled for the IP address
  • iselastic - True if the IP address is elastic.

30.1.4. Disk Volume Usage Record Format

For disk volumes, the following fields exist in a usage record.
  • account – name of the account
  • accountid – ID of the account
  • domainid – ID of the domain in which this account resides
  • zoneid – Zone where the usage occurred
  • description – A string describing what the usage record is tracking
  • usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
  • usagetype – A number representing the usage type (see Usage Types)
  • rawusage – A number representing the actual usage in hours
  • usageid – The volume ID
  • offeringid – The ID of the disk offering
  • type – Hypervisor
  • templateid – ROOT template ID
  • size – The amount of storage allocated
  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

30.1.5. Template, ISO, and Snapshot Usage Record Format

  • account – name of the account
  • accountid – ID of the account
  • domainid – ID of the domain in which this account resides
  • zoneid – Zone where the usage occurred
  • description – A string describing what the usage record is tracking
  • usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
  • usagetype – A number representing the usage type (see Usage Types)
  • rawusage – A number representing the actual usage in hours
  • usageid – The ID of the template, ISO, or snapshot
  • offeringid – The ID of the disk offering
  • templateid – Included only for templates (usage type 7). Source template ID.
  • size – Size of the template, ISO, or snapshot
  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

30.1.6. Load Balancer Policy or Port Forwarding Rule Usage Record Format

  • account - name of the account
  • accountid - ID of the account
  • domainid - ID of the domain in which this account resides
  • zoneid - Zone where the usage occurred
  • description - A string describing what the usage record is tracking
  • usage - String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
  • usagetype - A number representing the usage type (see Usage Types)
  • rawusage - A number representing the actual usage in hours
  • usageid - ID of the load balancer policy or port forwarding rule
  • startdate, enddate - The range of time for which the usage is aggregated; see Dates in the Usage Record

30.1.7. Network Offering Usage Record Format

  • account – name of the account
  • accountid – ID of the account
  • domainid – ID of the domain in which this account resides
  • zoneid – Zone where the usage occurred
  • description – A string describing what the usage record is tracking
  • usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
  • usagetype – A number representing the usage type (see Usage Types)
  • rawusage – A number representing the actual usage in hours
  • usageid – ID of the network offering
  • offeringid – Network offering ID
  • virtualMachineId – The ID of the virtual machine
  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

30.1.8. VPN User Usage Record Format

  • account – name of the account
  • accountid – ID of the account
  • domainid – ID of the domain in which this account resides
  • zoneid – Zone where the usage occurred
  • description – A string describing what the usage record is tracking
  • usage – String representation of the usage, including the units of usage (e.g. 'Hrs' for hours)
  • usagetype – A number representing the usage type (see Usage Types)
  • rawusage – A number representing the actual usage in hours
  • usageid – VPN user ID
  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

30.2. Usage Types

The following list shows all usage types, giving the Type ID, the Type Name, and a description of each.
  • 1 - RUNNING_VM: Tracks the total running time of a VM per usage record period. If the VM is upgraded during the usage period, you will get a separate usage record for the new upgraded VM.
  • 2 - ALLOCATED_VM: Tracks the total time from when a VM was created to when it was destroyed. This usage type is also useful in determining usage for specific templates such as Windows-based templates.
  • 3 - IP_ADDRESS: Tracks the public IP addresses owned by the account.
  • 4 - NETWORK_BYTES_SENT: Tracks the total number of bytes sent by all the VMs for an account. CloudStack does not currently track network traffic per VM.
  • 5 - NETWORK_BYTES_RECEIVED: Tracks the total number of bytes received by all the VMs for an account. CloudStack does not currently track network traffic per VM.
  • 6 - VOLUME: Tracks the total time from when a disk volume was created to when it was destroyed.
  • 7 - TEMPLATE: Tracks the total time from when a template (either created from a snapshot or uploaded to the cloud) was created to when it was destroyed. The size of the template is also returned.
  • 8 - ISO: Tracks the total time from when an ISO was uploaded to when it was removed from the cloud. The size of the ISO is also returned.
  • 9 - SNAPSHOT: Tracks the total time from when a snapshot was created to when it was destroyed.
  • 11 - LOAD_BALANCER_POLICY: Tracks the total time from when a load balancer policy was created to when it was removed. CloudStack does not track whether a VM has been assigned to a policy.
  • 12 - PORT_FORWARDING_RULE: Tracks the time from when a port forwarding rule was created until it was removed.
  • 13 - NETWORK_OFFERING: The time from when a network offering was assigned to a VM until it was removed.
  • 14 - VPN_USERS: The time from when a VPN user was created until the user was removed.
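The Type ID can be used to narrow a listUsageRecords query. For example, the following request (unsigned, with placeholder values, and assuming the optional type parameter of listUsageRecords) would return only running-VM records:

http://<IPAddress>:8080/client/api?command=listUsageRecords&type=1&startdate=2009-09-01&enddate=2009-09-30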

30.3. Example response from listUsageRecords

The following shows a sample response from a listUsageRecords API call:
            <listusagerecordsresponse>
                  <count>1816</count>
                 <usagerecord>
                    <account>user5</account>
                    <accountid>10004</accountid>
                    <domainid>1</domainid>
                    <zoneid>1</zoneid>
                        <description>i-3-4-WC running time (ServiceOffering: 1) (Template: 3)</description>
                    <usage>2.95288 Hrs</usage>
                       <usagetype>1</usagetype>
                    <rawusage>2.95288</rawusage>
                       <virtualmachineid>4</virtualmachineid>
                    <name>i-3-4-WC</name>
                       <offeringid>1</offeringid>
                    <templateid>3</templateid>
                    <usageid>245554</usageid>
                    <type>XenServer</type>
                    <startdate>2009-09-15T00:00:00-0700</startdate>
                    <enddate>2009-09-18T16:14:26-0700</enddate>
                  </usagerecord>

               … (1,815 more usage records)
            </listusagerecordsresponse>

30.4. Dates in the Usage Record

Usage records include a start date and an end date. These dates define the period of time for which the raw usage number was calculated. If daily aggregation is used, the start date is midnight on the day in question and the end date is 23:59:59 on the day in question (with one exception; see below). A virtual machine could have been deployed at noon on that day, stopped at 6pm on that day, then started up again at 11pm. When usage is calculated on that day, there will be 7 hours of running VM usage (usage type 1) and 12 hours of allocated VM usage (usage type 2). If the same virtual machine runs for the entire next day, there will be 24 hours of both running VM usage (type 1) and allocated VM usage (type 2).
Note: The start date is not the time a virtual machine was started, and the end date is not the time when a virtual machine was stopped. The start and end dates give the time range within which usage was calculated.
For network usage, the start date and end date again define the range in which the number of bytes transferred was calculated. If a user downloads 10 MB and uploads 1 MB in one day, there will be two records, one showing the 10 megabytes received and one showing the 1 megabyte sent.
There is one case where the start date and end date do not correspond to midnight and 11:59:59pm when daily aggregation is used. This occurs only for network usage records. When the usage server has more than one day's worth of unprocessed data, the old data will be included in the aggregation period. The start date in the usage record will show the date and time of the earliest event. For other types of usage, such as IP addresses and VMs, the old unprocessed data is not included in daily aggregation.

Time Zones

The following time zone identifiers are accepted by CloudStack. There are several places that have a time zone as a required or optional parameter. These include scheduling recurring snapshots, creating a user, and specifying the usage time zone in the Configuration table.
Etc/GMT+12
Etc/GMT+11
Pacific/Samoa
Pacific/Honolulu
US/Alaska
America/Los_Angeles
Mexico/BajaNorte
US/Arizona
US/Mountain
America/Chihuahua
America/Chicago
America/Costa_Rica
America/Mexico_City
Canada/Saskatchewan
America/Bogota
America/New_York
America/Caracas
America/Asuncion
America/Cuiaba
America/Halifax
America/La_Paz
America/Santiago
America/St_Johns
America/Araguaina
America/Argentina/Buenos_Aires
America/Cayenne
America/Godthab
America/Montevideo
Etc/GMT+2
Atlantic/Azores
Atlantic/Cape_Verde
Africa/Casablanca
Etc/UTC
Atlantic/Reykjavik
Europe/London
CET
Europe/Bucharest
Africa/Johannesburg
Asia/Beirut
Africa/Cairo
Asia/Jerusalem
Europe/Minsk
Europe/Moscow
Africa/Nairobi
Asia/Karachi
Asia/Kolkata
Asia/Bangkok
Asia/Shanghai
Asia/Kuala_Lumpur
Australia/Perth
Asia/Taipei
Asia/Tokyo
Asia/Seoul
Australia/Adelaide
Australia/Darwin
Australia/Brisbane
Australia/Canberra
Pacific/Guam
Pacific/Auckland

Event Types

VM.CREATE
TEMPLATE.EXTRACT
SG.REVOKE.INGRESS
VM.DESTROY
TEMPLATE.UPLOAD
HOST.RECONNECT
VM.START
TEMPLATE.CLEANUP
MAINT.CANCEL
VM.STOP
VOLUME.CREATE
MAINT.CANCEL.PS
VM.REBOOT
VOLUME.DELETE
MAINT.PREPARE
VM.UPGRADE
VOLUME.ATTACH
MAINT.PREPARE.PS
VM.RESETPASSWORD
VOLUME.DETACH
VPN.REMOTE.ACCESS.CREATE
ROUTER.CREATE
VOLUME.UPLOAD
VPN.USER.ADD
ROUTER.DESTROY
SERVICEOFFERING.CREATE
VPN.USER.REMOVE
ROUTER.START
SERVICEOFFERING.UPDATE
NETWORK.RESTART
ROUTER.STOP
SERVICEOFFERING.DELETE
UPLOAD.CUSTOM.CERTIFICATE
ROUTER.REBOOT
DOMAIN.CREATE
ROUTER.HA
DOMAIN.DELETE
STATICNAT.DISABLE
PROXY.CREATE
DOMAIN.UPDATE
SSVM.CREATE
PROXY.DESTROY
SNAPSHOT.CREATE
SSVM.DESTROY
PROXY.START
SNAPSHOT.DELETE
SSVM.START
PROXY.STOP
SNAPSHOTPOLICY.CREATE
SSVM.STOP
PROXY.REBOOT
SNAPSHOTPOLICY.UPDATE
SSVM.REBOOT
PROXY.HA
SNAPSHOTPOLICY.DELETE
SSVM.HA
VNC.CONNECT
VNC.DISCONNECT
NET.IPASSIGN
NET.IPRELEASE
NET.RULEADD
NET.RULEDELETE
NET.RULEMODIFY
NETWORK.CREATE
NETWORK.DELETE
LB.ASSIGN.TO.RULE
LB.REMOVE.FROM.RULE
LB.CREATE
LB.DELETE
LB.UPDATE
USER.LOGIN
USER.LOGOUT
USER.CREATE
USER.DELETE
USER.UPDATE
USER.DISABLE
TEMPLATE.CREATE
TEMPLATE.DELETE
TEMPLATE.UPDATE
TEMPLATE.COPY
TEMPLATE.DOWNLOAD.START
TEMPLATE.DOWNLOAD.SUCCESS
TEMPLATE.DOWNLOAD.FAILED
ISO.CREATE
ISO.DELETE
ISO.COPY
ISO.ATTACH
ISO.DETACH
ISO.EXTRACT
ISO.UPLOAD
SERVICE.OFFERING.CREATE
SERVICE.OFFERING.EDIT
SERVICE.OFFERING.DELETE
DISK.OFFERING.CREATE
DISK.OFFERING.EDIT
DISK.OFFERING.DELETE
NETWORK.OFFERING.CREATE
NETWORK.OFFERING.EDIT
NETWORK.OFFERING.DELETE
POD.CREATE
POD.EDIT
POD.DELETE
ZONE.CREATE
ZONE.EDIT
ZONE.DELETE
VLAN.IP.RANGE.CREATE
VLAN.IP.RANGE.DELETE
CONFIGURATION.VALUE.EDIT
SG.AUTH.INGRESS

Alerts

The following is the list of alert type numbers. The current alerts can be found by calling listAlerts.
MEMORY = 0
CPU = 1
STORAGE = 2
STORAGE_ALLOCATED = 3
PUBLIC_IP = 4
PRIVATE_IP = 5
HOST = 6
USERVM = 7
DOMAIN_ROUTER = 8
CONSOLE_PROXY = 9
ROUTING = 10 // lost connection to default route (to the gateway)
STORAGE_MISC = 11
USAGE_SERVER = 12
MANAGMENT_NODE = 13
DOMAIN_ROUTER_MIGRATE = 14
CONSOLE_PROXY_MIGRATE = 15
USERVM_MIGRATE = 16
VLAN = 17
SSVM = 18
USAGE_SERVER_RESULT = 19
STORAGE_DELETE = 20
UPDATE_RESOURCE_COUNT = 21 // Generated when we fail to update the resource count
USAGE_SANITY_RESULT = 22
DIRECT_ATTACHED_PUBLIC_IP = 23
LOCAL_STORAGE = 24
RESOURCE_LIMIT_EXCEEDED = 25 // Generated when a resource limit is exceeded. Currently used for recurring snapshots only
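Current alerts can be retrieved and filtered by these numbers. For example, a listAlerts request for host alerts might look like the following (unsigned, with a placeholder address, and assuming the optional type parameter of listAlerts):

http://<IPAddress>:8080/client/api?command=listAlerts&type=6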