Getting Started

Requirements

Morpheus is a software-based appliance capable of orchestrating many clouds and hypervisors. Before starting an installation it is important to understand the base requirements.

In the simplest configuration Morpheus needs one Appliance Server. The Appliance Server, by default, contains all the components necessary to orchestrate both VMs and containers. To get started, the following base requirements are recommended:

Base Requirements

  • Operating System: Ubuntu 14.04 / 16.04 or CentOS/RHEL 7.0 or newer.
  • Memory: 8 GB minimum
  • Storage: 100 GB storage minimum
  • Network connectivity from your users to the appliance over TCP 443 (HTTPS)
  • Inbound connectivity from provisioned VMs and container hosts to the appliance on ports 443 and 80 (needed for agent communication)
  • Internet Connectivity from Appliance (To download from Morpheus’ public docker repositories and virtual image catalog)
  • Superuser privileges via the sudo command for the user installing the Morpheus Appliance package.
  • An Appliance URL that is accessible to all managed hosts. All hosts managed by Morpheus must be able to reach the appliance server IP on port 443. This URL is configured under Admin->Settings. Morpheus also utilizes SSH (port 22) and Windows Remote Management (port 5985) to initialize a server.
  • An Appliance License is required for any operations involving provisioning.

Note

Ubuntu 16.10 and Amazon Linux are not supported.

Storage

Morpheus needs storage space for a few items. One is the built-in Elasticsearch store (used for log aggregation and stats collection metrics). Morpheus also keeps a workspace and a local virtual image cache for virtual image conversion and blueprint uploads. While the permanent storage of these can be offloaded to a Storage Provider, some space is still recommended for dealing with non-streamable virtual image formats.

In many common scenarios it may be prudent to configure a shared datastore on a storage cluster mounted at /var/opt/morpheus/morpheus-ui (this is where all user-based data and database data are persisted). Several folders located within can be independently relocated as desired.
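A shared datastore mount of this sort might be sketched as follows. This is a hypothetical example, assuming an NFS export named storage01.example.com:/exports/morpheus; substitute your storage cluster's address, export path, and filesystem type:

```shell
# Hypothetical NFS export for the Morpheus data directory; run before installing
# (or with Morpheus services stopped) so data lands on the shared storage.
sudo mkdir -p /var/opt/morpheus/morpheus-ui
echo 'storage01.example.com:/exports/morpheus  /var/opt/morpheus/morpheus-ui  nfs  defaults,_netdev  0 0' \
  | sudo tee -a /etc/fstab
sudo mount /var/opt/morpheus/morpheus-ui
```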

Network Connectivity

Morpheus primarily operates via communication with its agent, which is installed on all managed VMs or Docker hosts. This lightweight agent is responsible for aggregating logs and stats and sending them back to the appliance with minimal network traffic overhead. It is also capable of processing instructions related to provisioning and deployments initiated by the appliance server.

The Morpheus Agent exists for both Linux and Windows based platforms and opens NO ports on the guest operating system. Instead, it makes an outbound SSL (https / wss) connection to the appliance server. This is what is known as the appliance URL during configuration (in Admin->Settings). When the agent is started it automatically makes this connection and securely authenticates. Therefore, all VMs and Docker-based hosts managed by Morpheus must be able to reach the appliance server IP on port 443.

Morpheus also utilizes SSH (port 22) and Windows Remote Management (port 5985) to initialize a server. This includes sending remote command instructions to install the agent. It is actually possible for Morpheus to operate without agent connectivity (though stats and logs will not function) and utilize SSH/WinRM to perform operations. Once the agent is installed and connections are established, SSH/WinRM communication stops. This is why an outbound requirement exists for the appliance server to be able to utilize ports 22 and 5985.

Note

In newer versions of Morpheus this outbound connectivity is not mandatory. The agent can be installed by hand or via Guest Process APIs on cloud integrations such as VMware.

Components

The Appliance Server automatically installs several components for the operation of Morpheus. These include:

  • RabbitMQ (Messaging)
  • MySQL (Logistical Data store)
  • Elasticsearch (Logs / Metrics store)
  • Redis (Cache store)
  • Tomcat (Morpheus Application)
  • Nginx (Web frontend)
  • Guacamole (Remote console service for clientless remote console)
  • Check Server (Monitoring Agent for custom checks added via UI)

All of these are installed in an isolated way using Chef Zero to /opt/morpheus. It is also important to note that these services can be offloaded to separate servers or clusters as desired. For details, see the installation section and High Availability.

Common Ports & Requirements

The following chart is useful for troubleshooting Agent install, Static IP assignment, Remote Console connectivity, and Image transfers.

Common Ports & Requirements

  • Agent Communication (all methods, all OS): Node -> Appliance, port 443
    Requirement: DNS resolution from node to appliance URL
  • Agent Install
    • All methods (Linux): Node -> Appliance, port 80 (used for appliance yum and apt repos)
    • SSH (Linux): Appliance -> Node, port 22
      Requirements: DNS resolution from node to appliance URL; Virtual Images configured; SSH enabled on Virtual Image
    • WinRM (Windows): Appliance -> Node, port 5985
      Requirements: DNS resolution from node to appliance URL; Virtual Images configured; WinRM enabled on Virtual Image (winrm quickconfig)
    • Cloud-init (Linux):
      Requirements: Cloud-init installed on template/image; Cloud-init settings populated in User Settings or in Administration -> Provisioning; Agent install mode set to Cloud-Init in Cloud Settings
    • Cloudbase-init (Windows):
      Requirements: Cloudbase-init installed on template/image; Cloud-init settings populated in User Settings or in Administration -> Provisioning; Agent install mode set to Cloud-Init in Cloud Settings
    • VMtools (all OS):
      Requirements: VMtools installed on template; Cloud-init settings populated in Morpheus user settings or in Administration -> Provisioning when using static IPs; existing user credentials entered on Virtual Image when using DHCP; RPC mode set to VMtools in VMware cloud settings
  • Static IP Assignment & IP Pools
    • Cloud-Init (all OS):
      Requirements: network configured in Morpheus (gateway, primary and secondary DNS, CIDR populated, DHCP disabled); Cloud-init/Cloudbase-init installed on template/image; Cloud-init settings populated in Morpheus user settings or in Administration -> Provisioning
    • VMware Tools (all OS):
      Requirements: network configured in Morpheus (gateway, primary and secondary DNS, CIDR populated, DHCP disabled); VMtools installed on template/Virtual Image
  • Remote Console
    • SSH (Linux): Appliance -> Node, port 22
      Requirements: SSH enabled on node; user/password set on VM or host in Morpheus
    • RDP (Windows): Appliance -> Node, port 3389
      Requirements: RDP enabled on node; user/password set on VM or host in Morpheus
    • Hypervisor Console (all OS): Appliance -> ESXi Host, ports 5900-6000+ (port range requirements vary per environment)
      Requirements: GDB server opened on all ESXi host firewalls; ESXi host names resolvable by the Morpheus appliance
  • Morpheus Catalog Image Download (all OS): Appliance -> AWS S3, port 443
    Requirement: available space at /var/opt/morpheus/
  • Image Transfer (Stream, all OS): Appliance -> Datastore, port 443
    Requirement: hypervisor host names resolvable by the Morpheus appliance
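A few of the appliance-to-node rows above can be spot-checked from the appliance using bash's /dev/tcp redirection. A minimal sketch; the node hostname is hypothetical:

```shell
#!/bin/bash
# Spot-check appliance -> node ports from the chart above.
# node01.example.com is a placeholder; substitute a real managed host.

# port_open HOST PORT: succeeds if a TCP connection can be opened within 3 seconds
port_open() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

node="node01.example.com"
for port in 22 5985 3389; do
  if port_open "$node" "$port"; then
    echo "open:   $node:$port"
  else
    echo "closed: $node:$port"
  fi
done
```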

Installation

Morpheus comes packaged as a Debian or RPM package. It can be installed on a single on- or off-premise Linux host or configured for high availability and horizontal scaling. Morpheus is currently only supported on Ubuntu 14.04, Ubuntu 16.04, CentOS 7.0 or newer, and RHEL 7.0 or newer hosts (Ubuntu is recommended).

Ubuntu

To get started installing Morpheus on Ubuntu (currently 14.04), a few preparatory items should be addressed first.

  1. First make sure the apt repository is up to date by running sudo apt-get update. It is also advisable to verify that the machine's assigned hostname is self-resolvable.

    Important

    If the machine is unable to resolve its own hostname (nslookup hostname), some installation commands will be unable to verify service health during installation and will fail.

  2. Next simply download the relevant .deb package for installation. This package can be acquired from your account rep or via a free trial request from morpheushub.com.

    Tip

    Use the wget command to directly download the package to your appliance server. i.e. wget https://downloads.gomorpheus.com/path/to/package.deb

  3. Next we must install the package onto the machine and configure the morpheus services:

    sudo dpkg -i morpheus-appliance_x.x.x-1.amd64.deb
    sudo morpheus-ctl reconfigure
    
  4. Once the installation is complete the web interface will automatically start up. By default it will be resolvable at https://your_machine_name and in many cases this may not be resolvable from your browser. The url can be changed by editing /etc/morpheus/morpheus.rb and changing the value of appliance_url. After this has been changed simply run:

    sudo morpheus-ctl reconfigure
    sudo morpheus-ctl stop morpheus-ui
    sudo morpheus-ctl start morpheus-ui
    

    Note

    The morpheus-ui can take 2-3 minutes to startup before it becomes available.
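The hostname self-resolution requirement called out in step 1 can be checked with a short script before installing; a minimal sketch using getent:

```shell
#!/bin/bash
# Verify the machine can resolve its own hostname before running the installer.

# resolves NAME: succeeds if NAME resolves via the system resolver
resolves() {
  getent hosts "$1" > /dev/null
}

h="$(hostname)"
if resolves "$h"; then
  echo "OK: $h is self-resolvable"
else
  echo "WARNING: $h does not resolve; add it to /etc/hosts before installing"
fi
```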

There are additional post install settings that can be viewed in the Advanced section of the guide.

Once the browser is pointed to the appliance a first time setup wizard will be presented. Please follow the on screen instructions by creating the master account. From there you will be presented with the license settings page where a license can be applied for use (if a license is required you may request one or purchase one by contacting your sales representative).

More details on setting up infrastructure can be found throughout this guide.

Tip

If any issues occur it may be prudent to check the morpheus log for details at /var/log/morpheus/morpheus-ui/current.

CentOS

To get started installing Morpheus on CentOS/RHEL a few preparatory items should be addressed first.

  1. Configure firewalld to allow access from users on port 80 or 443 (Or remove firewall if not required).

  2. Make sure the machine is self resolvable to its own hostname.

  3. For RHEL: in order for the guacamole service (remote console) to install properly, some additional optional repositories first need to be added.

    • RHEL 7.x Amazon: yum-config-manager --enable rhui-REGION-rhel-server-optional
    • RHEL 7.x: yum-config-manager --enable rhel-7-server-optional-rpms
    • For Amazon users a Red Hat subscription is not required if the appropriate yum REGION repository is added instead, as demonstrated above.

    Important

    If the machine is unable to resolve its own hostname (nslookup hostname), some installation commands will be unable to verify service health during installation and will fail.

  4. Next simply download the relevant .rpm package for installation. This package can be acquired from your account rep or via a free trial request from morpheushub.com.

    Tip

    Use the wget command to directly download the package to your appliance server. i.e. wget https://downloads.gomorpheus.com/path/to/package.rpm

  5. Next we must install the package onto the machine and configure the morpheus services:

    sudo rpm -i morpheus-appliance-x.x.x-1.x86_64.rpm
    sudo morpheus-ctl reconfigure
    
  6. Once the installation is complete the web interface will automatically start up. By default it will be resolvable at https://your_machine_name and in many cases this may not be resolvable from your browser. The URL can be changed by editing /etc/morpheus/morpheus.rb and changing the value of appliance_url. After this has been changed simply run:

    sudo morpheus-ctl reconfigure
    sudo morpheus-ctl stop morpheus-ui
    sudo morpheus-ctl start morpheus-ui
    

    Note

    The morpheus-ui can take 2-3 minutes to startup before it becomes available.
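The firewalld preparation from step 1 can be done with firewall-cmd; a sketch, assuming firewalld is running:

```shell
# Allow user access to the appliance web interface (ports 80/443) through firewalld.
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

# Confirm the services are listed:
sudo firewall-cmd --list-services
```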

There are additional post install settings that can be viewed in the Advanced section of the guide.

Once the browser is pointed to the appliance a first time setup wizard will be presented. Please follow the on screen instructions by creating the master account. From there you will be presented with the license settings page where a license can be applied for use (if a license is required you may request one or purchase one by contacting your sales representative).

More details on setting up infrastructure can be found throughout this guide.

Tip

If any issues occur it may be prudent to check the morpheus log for details at /var/log/morpheus/morpheus-ui/current.

RHEL

To get started installing Morpheus on RHEL 7 a few prerequisite items are required.

The Red Hat Enterprise Linux 7 server needs to be registered and activated with a Red Hat subscription. The server optional RPMs repo needs to be enabled as well.

To check if the server has been activated, run subscription-manager version. Subscription Manager will return the version plus the python-rhsm dependency version.

If the server has not been registered and activated, subscription-manager version will return the message below.

sudo subscription-manager version
server type: This system is currently not registered
subscription management server: 0.9.51.24-1
subscription-manager: 1.10.14-7.el7
python-rhsm: 1.10.12-2.el7

When a server has been registered and activated with Red Hat, Subscription Manager will return the message below.

sudo subscription-manager version
server type: Red Hat Subscription Management
subscription management server: 0.9.51.24-1
subscription-manager: 1.10.14-7.el7
python-rhsm: 1.10.12-2.el7

If Subscription Manager returns the message “This system is currently not registered”, please follow the steps below to register the server.

Tip

To register the server you will need sudo permissions (member of the wheel group) or root access to the server. You will also need your Red Hat registered email address and password.

subscription-manager register

sudo subscription-manager register
Username: redhat@example.com
Password:

Note

This can take a minute to complete

sudo subscription-manager attach --auto

Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

To check whether the RHEL server has the Red Hat Enterprise Linux 7 Server - Optional (RPMs) repo enabled, run the following command to return the repo status.

Tip

To check the server repos you will need sudo permissions (member of the wheel group) or root access to the server.

sudo yum repolist all | grep "rhel-7-server-optional-rpms"
rhel-7-server-optional-rpms/7Server/x86_64    disabled

If the repo status is returned as disabled, enable the repo using Subscription Manager as below.

sudo subscription-manager repos --enable rhel-7-server-optional-rpms
Repository 'rhel-7-server-optional-rpms' is enabled for this system.

The message “Repository 'rhel-7-server-optional-rpms' is enabled for this system.” will appear after enabling the repo, confirming that it has been enabled.

Next simply download the relevant .rpm package for installation. This package can be acquired from your account rep or via a free trial request from morpheushub.com.

Tip

Use the wget command to directly download the package to your appliance server. i.e. wget https://downloads.gomorpheus.com/path/to/package.rpm

Next we must install the package onto the machine and configure the morpheus services:

sudo rpm -i morpheus-appliance-x.x.x-1.x86_64.rpm
sudo morpheus-ctl reconfigure

Once the installation is complete the web interface will automatically start up. By default it will be resolvable at https://your_machine_name and in many cases this may not be resolvable from your browser. The url can be changed by editing /etc/morpheus/morpheus.rb and changing the value of appliance_url. After this has been changed simply run:

sudo morpheus-ctl reconfigure
sudo morpheus-ctl stop morpheus-ui
sudo morpheus-ctl start morpheus-ui

Note

The morpheus-ui can take 2-3 minutes to startup before it becomes available.

There are additional post install settings that can be viewed in the Advanced section of the guide.

Once the browser is pointed to the appliance a first time setup wizard will be presented. Please follow the on screen instructions by creating the master account. From there you will be presented with the license settings page where a license can be applied for use (if a license is required you may request one or purchase one by contacting your sales representative).

More details on setting up infrastructure can be found throughout this guide.

Tip

If any issues occur it may be prudent to check the morpheus log for details at /var/log/morpheus/morpheus-ui/current.

Additional Options

There are several additional configuration options during installation that may be performed. For example, Morpheus provides convenient options for uploading your own SSL certificates as well as externalizing several dependent services.

System Defaults

Morpheus follows several install location conventions. Below is a list of system defaults for convenient management:

  • Installation Location: /opt/morpheus
  • Log Location: /var/log/morpheus
    • Morpheus-UI: /var/log/morpheus/morpheus-ui
    • MySQL: /var/log/morpheus/mysql
    • Nginx: /var/log/morpheus/nginx
    • Check Server: /var/log/morpheus/check-server
    • Elasticsearch: /var/log/morpheus/elasticsearch
    • RabbitMQ: /var/log/morpheus/rabbitmq
    • Redis: /var/log/morpheus/redis
  • User-defined install/config: /etc/morpheus/morpheus.rb

SSL Certificates

The default installation generates a self-signed SSL certificate. To implement a third-party certificate:

  1. Copy the private key and certificate to /etc/morpheus/ssl/your_fqdn_name.key and /etc/morpheus/ssl/your_fqdn_name.crt respectively.

  2. Edit the configuration file /etc/morpheus/morpheus.rb and add the following entries:

    nginx['ssl_certificate'] = 'path to the certificate file'
    nginx['ssl_server_key'] = 'path to the server key file'
    

    Note

    Both files should be owned by root and only readable by root. Also, if the server certificate is signed by an intermediate, you should include the signing chain inside the certificate file.

  3. Next simply reconfigure the appliance and restart nginx:

    sudo morpheus-ctl reconfigure
    sudo morpheus-ctl restart nginx
    
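Before the reconfigure it can be worth confirming that the certificate and key actually belong together: their RSA modulus digests must match. A sketch using the hypothetical your_fqdn_name file names from step 1:

```shell
# The two digests printed below must be identical, or nginx will reject the pair.
openssl x509 -noout -modulus -in /etc/morpheus/ssl/your_fqdn_name.crt | openssl md5
openssl rsa  -noout -modulus -in /etc/morpheus/ssl/your_fqdn_name.key | openssl md5

# Restrict the files to root, as noted above:
sudo chown root:root /etc/morpheus/ssl/your_fqdn_name.crt /etc/morpheus/ssl/your_fqdn_name.key
sudo chmod 600 /etc/morpheus/ssl/your_fqdn_name.crt /etc/morpheus/ssl/your_fqdn_name.key
```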

Additional Configuration Options

There are several other options available to the /etc/morpheus/morpheus.rb file that can be useful when setting up external service integrations or high availability:

mysql['enable'] = false
mysql['host'] = '52.53.240.28'
mysql['port'] = 10004
mysql['morpheus_db'] = 'morpheusdb01'
mysql['morpheus_db_user'] = 'merovingian'
mysql['morpheus_password'] = 'Wm5n5gXqXCe9v52'
rabbitmq['enable'] = false
rabbitmq['vhost'] = 'zion'
rabbitmq['queue_user'] = 'dujour'
rabbitmq['queue_user_password'] = '5tfg9n2iBifzW5c'
rabbitmq['host'] = '54.183.196.152'
rabbitmq['port'] = '10008'
rabbitmq['stomp_port'] = '10010'
redis['enable'] = false
redis['host'] = '52.53.240.28'
redis['port'] = 10009
elasticsearch['enable'] = false
elasticsearch['cluster'] = 'nebuchadnezzar'
elasticsearch['es_hosts'] = {'52.53.214.68' => 10003}

These settings allow one to externally configure and scale MySQL, Elasticsearch, Redis, and RabbitMQ, which is critical for a high availability setup.
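Before reconfiguring against an external database it can help to verify the endpoint with the embedded MySQL client. A sketch reusing the sample host, port, and user above; enter the configured password when prompted:

```shell
# Test connectivity to the external MySQL endpoint defined in morpheus.rb.
# Host, port, and user are taken from the sample configuration above.
/opt/morpheus/embedded/mysql/bin/mysql \
  -h 52.53.240.28 -P 10004 \
  -u merovingian -p \
  -e 'SELECT 1;'
```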

Upgrading

Morpheus provides a very simple and convenient upgrade process. In most cases it is simply a matter of installing the new package on top of itself and reconfiguring the services.

Important

All services except the morpheus-ui must be running during a reconfigure. The morpheus-ui must also be restarted, or stopped and started, during an upgrade. Failure to do so will result in errors.

Debian / Ubuntu

Simply download the latest package or request the latest package from your account service representative.

Then run the install process as follows:

sudo dpkg -i morpheus-appliance_x.x.x-1.amd64.deb
sudo morpheus-ctl stop morpheus-ui
sudo morpheus-ctl reconfigure
sudo morpheus-ctl start morpheus-ui

This typically is enough to complete a full upgrade. Databases will automatically be migrated upon restart of the application and service version upgrades will automatically be applied.

CentOS / RHEL

Yum-based package upgrades are a little different. In this case we want to run an rpm -U command, as the package manager is slightly different.

sudo rpm -U morpheus-appliance-x.x.x-1.x86_64.rpm
sudo morpheus-ctl stop morpheus-ui
sudo morpheus-ctl reconfigure
sudo morpheus-ctl start morpheus-ui

Tip

Sometimes it may be necessary to restart all appliance services on the host. In order to do this simply type sudo morpheus-ctl restart. This will restart ALL services.

Initial Appliance Setup

Appliance Setup

After installation, log into the appliance at the URL presented upon completion. An initial setup wizard walks through the first account and user creations.

  1. Enter Master Account name
    • Typically, the Master Account name is your Company name.
  2. Create Master User
    • First Name
    • Last Name
    • Username
    • Email Address
    • Password: must be at least 8 characters long and contain one of each of the following: uppercase letter, lowercase letter, number, special character
  3. Enter Appliance Name & Appliance URL
    • The Appliance Name is used for white labeling and as a reference for multi-appliance installations.
    • The Appliance URL is the URL all provisioned instances will report back to. Example: https://example.morpheusdata.com.

The Appliance URL can be changed later, and can also be set to a different URL per cloud integration.

  4. Optionally Enable or Disable Backups, Monitoring, or Logs from this screen.

Note

You may adjust these settings from the Administration section.

Note

The Master Account name is the top-level admin account.

Note

The Master User is the system super user and will have full access privileges.
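The Master User password rule above can be expressed as a small check; a sketch (the function name is illustrative, not part of Morpheus):

```shell
# valid_password PASSWORD: succeeds if PASSWORD meets the wizard's stated rules:
# at least 8 characters, with an uppercase letter, a lowercase letter,
# a number, and a special character.
valid_password() {
  local p="$1"
  [ "${#p}" -ge 8 ] || return 1
  case "$p" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$p" in *[a-z]*) ;; *) return 1 ;; esac
  case "$p" in *[0-9]*) ;; *) return 1 ;; esac
  case "$p" in *[!a-zA-Z0-9]*) ;; *) return 1 ;; esac
  return 0
}

valid_password 'Examp1e!' && echo "password accepted"
```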

Upon completion of the initial appliance setup, you will be taken to the Admin -> Settings page, where you will add your License Key.

Login Methods

Master Tenant

Enter your username and password

Subtenant

To log in, subtenants can either use the master tenant URL with subtenant\username formatting:

Example:
I have a username subuser that belongs to a tenant with the subdomain subaccount. When logging in from the main login URL, I would enter: subaccount\subuser

Or use the tenant specific URL which can be found and configured under Administration > Tenants > Select Tenant > Identity Sources.

../_images/tenant_url.png

Important

In 3.4.0+ Subtenant users will no longer be able to login from the main login page without specifying their subdomain.

Add a License Key

In order to provision anything in Morpheus, a Morpheus License Key must be applied.

If you do not already have a license key, one may be requested from https://www.morpheushub.com or from your Morpheus representative.

In the Administration -> Settings section, select the LICENSE tab, paste your License Key, and click “UPDATE”.

../_images/license_key.png

When the license is accepted, your license details will populate in the Current License section.

If you receive an error message and your license is not accepted, please check it was copied in full and then contact your Morpheus representative. You can also verify the License Key and expiration at https://www.morpheushub.com.

Advanced Configuration

Morpheus provides more advanced configuration capabilities, including High Availability configurations, and support for tougher network environments with offline installation and Proxy configurations.

Offline Installer

For customers with an appliance behind a firewall/proxy that does not allow downloads from our Amazon download site, an offline package is available containing the packages the standard Morpheus installer would otherwise have downloaded.

Offline Installer Requirements

  • NTP should be correctly configured and the server should be able to connect to the NTP server in the ntp.conf file.
  • The OS package repositories should be configured to use local LAN repository servers or the server should be able to receive packages from the configured repositories.
  • The standard Morpheus and offline packages must be downloaded from another system and transferred to the Morpheus Appliance server.

Note

The offline package is tied 1-to-1 to the appliance release. For example, the offline package for 2.12.2-1 should be used with the appliance package 2.12.2-1.

Offline Install

Ubuntu

  1. Download both the regular Morpheus Appliance package and the Offline Installer packages on to the appliance server:

    wget http://example_url/morpheus-appliance_package_url.deb
    wget http://example_url/morpheus-appliance_package_offline_url.deb
    
  2. Install the appliance package. DO NOT run morpheus-ctl reconfigure yet.

    sudo dpkg -i morpheus-appliance_version_amd64.deb
    
  3. Install the offline package using dpkg -i morpheus-appliance-offline_2.12.2~rc1-1_all.deb.

    sudo dpkg -i morpheus-appliance-offline_version_all.deb
    
  4. Set the Morpheus UI appliance URL (if needed; the hostname will be set automatically).

    sudo vi /etc/morpheus/morpheus.rb
    # edit appliance_url to a resolvable URL (if not configured correctly by default)
    
  5. Reconfigure the appliance to install required packages

    sudo morpheus-ctl reconfigure
    

The Chef run should complete successfully. There is a small pause while Chef runs the remote_file[package_name] create action and verifies the checksum. After the reconfigure is complete, the morpheus-ui will start and be up in a few minutes.

Note

Tail the morpheus-ui log file with morpheus-ctl tail morpheus-ui and look for the Morpheus ascii logo to know when the morpheus-ui is up.

CentOS

  1. Download both the regular Morpheus Appliance package and the Offline Installer packages on to the appliance server:

    wget http://example_url/morpheus-appliance_package_url.noarch.rpm
    wget http://example_url/morpheus-appliance_package_offline_url.noarch.rpm
    
  2. Install the appliance package. DO NOT run morpheus-ctl reconfigure yet.

    sudo rpm -i morpheus-appliance_version_amd64.rpm
    
  3. Install the offline package using rpm -i morpheus-appliance-offline_2.12.2~rc1-1_all.rpm

    sudo rpm -i morpheus-appliance-offline_version_all.rpm
    
  4. Set the Morpheus UI appliance URL (if needed; the hostname will be set automatically). Edit appliance_url to a resolvable URL (if not configured correctly by default).

    sudo vi /etc/morpheus/morpheus.rb
    
  5. Reconfigure the appliance to install required packages

    sudo morpheus-ctl reconfigure
    

The Chef run should complete successfully. There is a small pause while Chef runs the remote_file[package_name] create action and verifies the checksum. After the reconfigure is complete, the morpheus-ui will start and be up in a few minutes.

Note

Tail the morpheus-ui log file with morpheus-ctl tail morpheus-ui and look for the Morpheus ascii logo to know when the morpheus-ui is up.

Proxies

Overview

In many situations, companies deploy virtual machines in proxy-restricted environments for things such as PCI compliance, or just general security. As a result, Morpheus provides out-of-the-box support for proxy connectivity. Proxy authentication support is also provided, with both Basic Authentication capabilities as well as NTLM for Windows proxy environments. Morpheus is even able to configure the virtual machines it provisions to utilize these proxies by setting up the operating system's proxy settings directly (restricted to cloud-init based Linux platforms for now, but this can also be done on Windows-based platforms in a different manner).

To get started with proxies, it may first be important to configure the Morpheus appliance itself to have proxy access for downloading service catalog images. To configure this, visit the Admin -> Settings page, where a section labeled “Proxy Settings” is located. Fill in the relevant connection info needed to utilize the proxy. It is also advised to ensure that the Linux environment variables http_proxy, https_proxy, and no_proxy are set appropriately.
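Those environment variables can be set system-wide; a sketch with a hypothetical proxy endpoint:

```shell
# Hypothetical proxy endpoint; substitute your proxy host and port.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1"

# To persist across sessions, place the same assignments (without "export")
# in /etc/environment.
```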

Defining Proxies

Proxies can be used in a few different contexts and can optionally be scoped to specific networks being provisioned into, or to a cloud integration as a whole. To configure a proxy for use by the provisioning engines within Morpheus, go to Infrastructure -> Networks -> Proxies. Here we can create records representing connection information for various proxies, including the host IP address, the proxy port, and any credentials (if necessary) needed to utilize the proxy. Once these proxies are defined, we can use them in various contexts.

Cloud Communication

When Morpheus needs to connect to various cloud APIs to issue provisioning commands or to sync in existing environments, those API endpoints must be accessible by the appliance. In some cases the appliance may be behind a proxy when it comes to public cloud access, such as Azure and AWS. To configure a cloud integration to utilize a proxy, when adding or editing a cloud there is a setting called “API Proxy” under “Advanced Options”. This is where the proxy of choice can be selected to instruct the provisioning engine how to communicate with the public cloud. Simply adjust this setting and the cloud should be able to receive/issue instructions.

Provisioning with Proxies

Proxy configurations can vary from operating system to operating system, and in some cases they must be configured in the blueprints as a prerequisite. In other cases they can be configured automatically, mostly through cloud-init (which all of our out-of-the-box service catalog utilizes on all clouds). When editing/creating a cloud there is a setting for “Provisioning Proxy” in “Provisioning Options”. If this proxy is set, Morpheus will automatically apply the proxy settings to the guest operating system.

Overriding proxy settings can also be done on the Network record. Networks (or subnets) can be configured in Infrastructure -> Networks or on the Networks tab of the relevant Cloud detail page. Here, a proxy can also be assigned as well as additional options like the No Proxy rules for proxy exceptions.

Docker

When provisioning Docker-based hosts within a proxy environment it is up to the user to configure the Docker host's proxy configuration manually. There are workflows that can be configured via the Automation engine to make this automatic when creating Docker-based hosts. Please see the documentation on Docker and proxies for specific information.

Proxy setups can vary widely from company to company, and it may be advisable to contact support for help configuring Morpheus to work in your proxy environment.

Morpheus DB Migration

If your new installation is part of a migration or you need to move the data from your original Morpheus database, this is easily accomplished by using a stateful dump.

To begin this, stop the Morpheus UI on your original Morpheus server:

[root@app-server-old ~] morpheus-ctl stop morpheus-ui

Once this is done you can safely export. To access the MySQL shell we will need the password for the Morpheus DB user. We can find this in the morpheus-secrets file:

[root@app-server-old ~] cat /etc/morpheus/morpheus-secrets.json | grep morpheus_password
"morpheus_password": "451e122cr5d122asw3de5e1b", <---------------this one
"morpheus_password": "9b5vdj4de5awf87d",

Take note of the first morpheus_password as it will be used to invoke a dump. Morpheus provides embedded binaries for this task. Invoke it via the embedded path and specify the host. In this example we are using the morpheus database on the MySQL listening on localhost. Enter the password copied from the previous step when prompted:

[root@app-server-old ~] /opt/morpheus/embedded/mysql/bin/mysqldump -u morpheus -h 127.0.0.1 morpheus -p > /tmp/morpheus_backup.sql
Enter password:

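Before copying the file to the new server, it can help to sanity-check that the dump completed; mysqldump appends a "Dump completed" marker as its last line. A minimal check (the path matches the example above):

```shell
# Verify the dump file is non-empty and ends with mysqldump's completion marker.
BACKUP="/tmp/morpheus_backup.sql"
if [ -s "$BACKUP" ] && tail -n 1 "$BACKUP" | grep -q 'Dump completed'; then
  echo "dump looks complete"
else
  echo "dump missing or incomplete"
fi
```
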
This file needs to be pushed to the new Morpheus Installation’s backend. Depending on the GRANTS in the new MySQL backend, this will likely require moving this file to one of the new Morpheus frontend servers.

Once the file is in place it can be imported into the backend. Begin by ensuring the Morpheus UI service is stopped on all of the application servers:

[root@app-server-new ~] morpheus-ctl stop morpheus-ui

Then you can import the MySQL dump into the target database using the embedded MySQL binaries, specifying the database host, and entering the password for the morpheus user when prompted:

[root@app-server-new ~] /opt/morpheus/embedded/mysql/bin/mysql -u morpheus -h 10.1.2.2 morpheus -p < /tmp/morpheus_backup.sql
Enter password:

The data from the old appliance is now replicated on the new appliance. Simply start the UI to complete the process:

[root@app-server-new ~] morpheus-ctl start morpheus-ui

High Availability Configuration

Overview

Morpheus provides a wide array of options when it comes to deployment architectures. It can start as a simple one-machine instance where all services run on the same machine, or it can be split into individual services per machine and configured for high availability, either in the same region or across regions. Naturally, high availability can grow more complicated depending on the configuration you want, and this article covers the basic concepts of the Morpheus HA architecture that can be applied in a wide array of configurations.

There are four primary tiers of services represented within the Morpheus appliance: the App Tier, Transactional Database Tier, Non-Transactional Database Tier, and Message Tier. Each of these tiers has its own recommendations for high availability deployments, which we cover below.

../_images/morpheusHA.png

Important

This is a sample configuration only. Customer configurations and requirements will vary.

Transactional Database Tier

The transactional database tier usually consists of a MySQL-compatible database. It is recommended that a lockable clustered configuration be used (currently Percona XtraDB Cluster in Permissive Mode is the most recommended). There are several documents online related to configuring and setting up an XtraDB Cluster, but most simply it can be laid out in a multi-master configuration. Some nodes can be set up with replication delay and others with none. It is common practice to have no replication delay within the same region and to allow some replication delay cross-region. This does increase the risk of job run overlap between the two regions; however, the concurrent operations typically self-correct and this is a non-issue.

Non-Transactional Database Tier

The non-transactional tier consists of an Elasticsearch (version 5.6.10) cluster. Elasticsearch is used for log aggregation data and temporal aggregation data (essentially stats, metrics, and logs), which enables high write throughput at scale. Elasticsearch is a clustered database, meaning all nodes, no matter the region, need to be connected to each other over what it calls the "Transport" protocol. It is fairly simple to set up, as all nodes are identical. It is also a Java-based system and requires a sizable chunk of memory for larger data sets: 8 GB is recommended, and more nodes can be added to scale horizontally or vertically.

Messaging Tier

The messaging tier is an AMQP-based tier that also uses the STOMP protocol (for agent communication). The primary recommended model is RabbitMQ for queue services. RabbitMQ is also a clustered queuing system and needs at least 3 instances for HA configurations; this is due to the elections RabbitMQ manages in failover scenarios. If doing a cross-region HA RabbitMQ cluster, it is recommended to have at least a 3-node RabbitMQ cluster per region. Typically, to handle HA, a RabbitMQ cluster should be placed between a load balancer and the front-end application server to handle cross-host connections. The ports that must be forwarded for a RabbitMQ cluster are 5672 and 61613. A RabbitMQ cluster can run on machines with less memory depending on how frequently large request bursts occur; 4 to 8 GB of memory is recommended to start.
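
The load balancer in front of the queue tier only needs to pass TCP for the two ports above. As a sketch under assumed node addresses (the IPs and backend names below are placeholders), an HAProxy configuration fragment might look like:

```
# AMQP (5672) and STOMP (61613) balanced across three RabbitMQ nodes
listen rabbitmq_amqp
    bind *:5672
    mode tcp
    balance roundrobin
    server rabbit-1 10.30.20.101:5672 check
    server rabbit-2 10.30.20.102:5672 check
    server rabbit-3 10.30.20.103:5672 check

listen rabbitmq_stomp
    bind *:61613
    mode tcp
    balance roundrobin
    server rabbit-1 10.30.20.101:61613 check
    server rabbit-2 10.30.20.102:61613 check
    server rabbit-3 10.30.20.103:61613 check
```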

Application Tier

The application tier is installed with the same Debian or yum repository package that Morpheus is normally distributed with. Advanced configuration allows the additional tiers to be skipped, leaving only the "stateless" services that need to run. These stateless services include Nginx, Tomcat, and Redis (to be phased out at a later date). These machines should also have at least 8 GB of memory. They can be configured across all regions and placed behind a central or geo-based load balancer. They typically connect to all other tiers, as none of the other tiers talk to each other except through the central application tier. One final piece of setting up the application tier: shared storage is necessary for maintaining items like deployment archives, virtual image catalogs, and backups. These can be externalized to an object storage service such as Amazon S3 or OpenStack Swift; if not using those options, a simple NFS cluster can be used for the shared storage.
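
If an NFS export is used for the shared storage, each application node would mount it over the Morpheus data directory. A hypothetical /etc/fstab entry (the server name and export path are placeholders):

```
nfs-server.example.com:/export/morpheus   /var/opt/morpheus/morpheus-ui   nfs   defaults,_netdev   0 0
```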

../_images/morpheus-ha-multi-configuration.png

Database Tier

Installation and configuration of Percona XtraDB Cluster on CentOS/RHEL 7

Important

This is a sample configuration only. Customer configurations and requirements will vary.

Requirements

Percona requires the following ports for the cluster nodes. Please create the appropriate firewall rules on your Percona nodes.

  • 3306
  • 4444
  • 4567
  • 4568

Percona also recommends setting the SELinux policy to permissive. You can temporarily set the mode to permissive by running

sudo setenforce 0

If you want the setting to take effect permanently, you will need to edit the SELinux configuration file, which can be found at /etc/selinux/config.
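
After editing, the SELINUX line in that file should read (excerpt of /etc/selinux/config):

```
SELINUX=permissive
```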

Add Percona Repo

  1. Add the Percona repo to your Linux distro.

    sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
    
  2. Check the repo by running the below command.

    sudo yum list | grep percona
    
  3. The below commands will clean the repos and update the server.

    sudo yum clean all
    sudo yum update -y
    

Installing Percona XtraDB Cluster

  1. The below command will install the Percona XtraDB Cluster software and its dependencies.

    sudo yum install Percona-XtraDB-Cluster-57
    
    Note: During the installation you will receive the below message. Accept the Percona GPG key to install the software.
    
    retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
    Importing GPG key 0xCD2EFD2A:
    Userid     : "Percona MySQL Development Team <mysql-dev@percona.com>"
    Fingerprint: 430b df5c 56e7 c94e 848e e60c 1c4c bdcd cd2e fd2a
    Package    : percona-release-0.1-4.noarch (installed)
    From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
    Is this ok [y/N]: y
    
  2. Next we need to enable the mysql service so that it starts at boot.

    sudo systemctl enable mysql
    
  3. Next we need to start mysql

    sudo systemctl start mysql
    
  4. Next we will log into the MySQL server and set a new password. To get the temporary root MySQL password, run the below command. The command will print the password to the screen; copy the password.

    sudo grep 'temporary password' /var/log/mysqld.log
    
  5. Login to mysql

    mysql -u root -p
    password: `enter password copied above`
    
  6. Change the root user's password for the MySQL database

    ALTER USER 'root'@'localhost' IDENTIFIED BY 'MySuperSecurePasswordhere';
    
  7. Create the sstuser user and grant the permissions.

    mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'M0rpheus17';
    

    Note

    The sstuser and password will be used in the /etc/my.cnf configuration.

    mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
    
    mysql> FLUSH PRIVILEGES;
    
  8. Exit mysql then stop the mysql services:

    mysql> exit
    Bye
    $ sudo systemctl stop mysql.service
    
  9. Now install the Percona software on the other nodes using the same steps.

Once the service is stopped on all nodes move onto the next step.

Add [mysqld] to my.cnf in /etc/

  1. Copy the below contents to /etc/my.cnf. The node_name and node_address need to be unique on each of the nodes. The first node does not require the gcomm value to be set.

    $ sudo vi /etc/my.cnf
    
    [mysqld]
    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    
    wsrep_cluster_name=popeye
    wsrep_cluster_address=gcomm://  # Leave blank on the first (bootstrap) node. On the other nodes, list the IP address of the primary node first, then the remaining nodes, separated by commas, e.g. 10.30.20.196,10.30.20.197,10.30.20.198
    
    wsrep_node_name=morpheus-node01
    wsrep_node_address=10.30.20.57
    
    wsrep_sst_method=xtrabackup-v2
    wsrep_sst_auth=sstuser:M0rpheus17
    pxc_strict_mode=PERMISSIVE
    
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2
    
  2. Save /etc/my.cnf
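
For illustration, a second node's /etc/my.cnf differs only in the cluster address list and node identity. Assuming the three node IPs from the example comment above (all other settings unchanged), the differing lines might look like:

```
wsrep_cluster_address=gcomm://10.30.20.196,10.30.20.197,10.30.20.198
wsrep_node_name=morpheus-node02
wsrep_node_address=10.30.20.197
```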

Bootstrapping the first Node in the cluster

Important

Ensure mysql.service is stopped prior to bootstrap.

  1. To bootstrap the first node in the cluster run the below command.

    systemctl start mysql@bootstrap.service
    

    Note

    The mysql service will start during the boot strap.

  2. To verify the bootstrap, log in to mysql on the master node and run show status like 'wsrep%';

    # mysql -u root -p
    
       mysql>  show status like 'wsrep%';
       +----------------------------------+--------------------------------------+
       | Variable_name                    | Value                                |
       +----------------------------------+--------------------------------------+
       | wsrep_local_state_uuid           | 591179cb-a98e-11e7-b9aa-07df8a228fe9 |
       | wsrep_protocol_version           | 7                                    |
       | wsrep_last_committed             | 1                                    |
       | wsrep_replicated                 | 0                                    |
       | wsrep_replicated_bytes           | 0                                    |
       | wsrep_repl_keys                  | 0                                    |
       | wsrep_repl_keys_bytes            | 0                                    |
       | wsrep_repl_data_bytes            | 0                                    |
       | wsrep_repl_other_bytes           | 0                                    |
       | wsrep_received                   | 2                                    |
       | wsrep_received_bytes             | 141                                  |
       | wsrep_local_commits              | 0                                    |
       | wsrep_local_cert_failures        | 0                                    |
       | wsrep_local_replays              | 0                                    |
       | wsrep_local_send_queue           | 0                                    |
       | wsrep_local_send_queue_max       | 1                                    |
       | wsrep_local_send_queue_min       | 0                                    |
       | wsrep_local_send_queue_avg       | 0.000000                             |
       | wsrep_local_recv_queue           | 0                                    |
       | wsrep_local_recv_queue_max       | 2                                    |
       | wsrep_local_recv_queue_min       | 0                                    |
       | wsrep_local_recv_queue_avg       | 0.500000                             |
       | wsrep_local_cached_downto        | 0                                    |
       | wsrep_flow_control_paused_ns     | 0                                    |
       | wsrep_flow_control_paused        | 0.000000                             |
       | wsrep_flow_control_sent          | 0                                    |
       | wsrep_flow_control_recv          | 0                                    |
       | wsrep_flow_control_interval      | [ 100, 100 ]                         |
       | wsrep_flow_control_interval_low  | 100                                  |
       | wsrep_flow_control_interval_high | 100                                  |
       | wsrep_flow_control_status        | OFF                                  |
       | wsrep_cert_deps_distance         | 0.000000                             |
       | wsrep_apply_oooe                 | 0.000000                             |
       | wsrep_apply_oool                 | 0.000000                             |
       | wsrep_apply_window               | 0.000000                             |
       | wsrep_commit_oooe                | 0.000000                             |
       | wsrep_commit_oool                | 0.000000                             |
       | wsrep_commit_window              | 0.000000                             |
       | wsrep_local_state                | 4                                    |
       | wsrep_local_state_comment        | Synced                               |
       | wsrep_cert_index_size            | 0                                    |
       | wsrep_cert_bucket_count          | 22                                   |
       | wsrep_gcache_pool_size           | 1320                                 |
       | wsrep_causal_reads               | 0                                    |
       | wsrep_cert_interval              | 0.000000                             |
       | wsrep_ist_receive_status         |                                      |
       | wsrep_ist_receive_seqno_start    | 0                                    |
       | wsrep_ist_receive_seqno_current  | 0                                    |
       | wsrep_ist_receive_seqno_end      | 0                                    |
       | wsrep_incoming_addresses         | 10.30.20.196:3306                    |
       | wsrep_desync_count               | 0                                    |
       | wsrep_evs_delayed                |                                      |
       | wsrep_evs_evict_list             |                                      |
       | wsrep_evs_repl_latency           | 0/0/0/0/0                            |
       | wsrep_evs_state                  | OPERATIONAL                          |
       | wsrep_gcomm_uuid                 | 07c8c8fe-a998-11e7-883e-06949cfe5af3 |
       | wsrep_cluster_conf_id            | 1                                    |
       | wsrep_cluster_size               | 1                                    |
       | wsrep_cluster_state_uuid         | 591179cb-a98e-11e7-b9aa-07df8a228fe9 |
       | wsrep_cluster_status             | Primary                              |
       | wsrep_connected                  | ON                                   |
       | wsrep_local_bf_aborts            | 0                                    |
       | wsrep_local_index                | 0                                    |
       | wsrep_provider_name              | Galera                               |
       | wsrep_provider_vendor            | Codership Oy <info@codership.com>    |
       | wsrep_provider_version           | 3.22(r8678538)                       |
       | wsrep_ready                      | ON                                   |
       +----------------------------------+--------------------------------------+
        67 rows in set (0.01 sec)
    

    A table will appear with the status and rows.

  3. Next, create the database you will be using with Morpheus.

    mysql> CREATE DATABASE morpheusdb;
    
    mysql> show databases;
    
  4. Next create your Morpheus database user. The user needs to be created either with the IP address of the Morpheus application server or with @'%' in the user name, which allows the user to log in from anywhere.

    mysql> CREATE USER 'morpheusadmin'@'%' IDENTIFIED BY 'Cloudy2017';
    
  5. Next, grant your new Morpheus user permissions on the database.

    mysql> GRANT ALL PRIVILEGES ON *.* TO 'morpheusadmin'@'%' IDENTIFIED BY 'Cloudy2017' WITH GRANT OPTION;
    
    
    mysql> FLUSH PRIVILEGES;
    
  6. Check the permissions for your user.

    SHOW GRANTS FOR 'morpheusadmin'@'%';
    
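If you prefer not to use '%', a sketch of an IP-restricted variant (the application server address 10.30.20.10 is a placeholder):

    mysql> CREATE USER 'morpheusadmin'@'10.30.20.10' IDENTIFIED BY 'Cloudy2017';
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'morpheusadmin'@'10.30.20.10' WITH GRANT OPTION;
    mysql> FLUSH PRIVILEGES;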

Bootstrap the Remaining Nodes

  1. To bootstrap the remaining nodes into the cluster run the following command on each node:

    sudo systemctl start mysql.service
    

    The services will automatically connect to the cluster using the sstuser we created earlier.

    Note

    Bootstrap failures are commonly caused by misconfigured /etc/my.cnf files.

Verification

  1. To verify the cluster, log in to mysql on the master node and run show status like 'wsrep%';

    $ mysql -u root -p
    
     mysql>  show status like 'wsrep%';
    
    +----------------------------------+-------------------------------------------------------+
     | Variable_name                    | Value                                                 |
     +----------------------------------+-------------------------------------------------------+
     | wsrep_local_state_uuid           | 591179cb-a98e-11e7-b9aa-07df8a228fe9                  |
     | wsrep_protocol_version           | 7                                                     |
     | wsrep_last_committed             | 4                                                     |
     | wsrep_replicated                 | 3                                                     |
     | wsrep_replicated_bytes           | 711                                                   |
     | wsrep_repl_keys                  | 3                                                     |
     | wsrep_repl_keys_bytes            | 93                                                    |
     | wsrep_repl_data_bytes            | 426                                                   |
     | wsrep_repl_other_bytes           | 0                                                     |
     | wsrep_received                   | 10                                                    |
     | wsrep_received_bytes             | 774                                                   |
     | wsrep_local_commits              | 0                                                     |
     | wsrep_local_cert_failures        | 0                                                     |
     | wsrep_local_replays              | 0                                                     |
     | wsrep_local_send_queue           | 0                                                     |
     | wsrep_local_send_queue_max       | 1                                                     |
     | wsrep_local_send_queue_min       | 0                                                     |
     | wsrep_local_send_queue_avg       | 0.000000                                              |
     | wsrep_local_recv_queue           | 0                                                     |
     | wsrep_local_recv_queue_max       | 2                                                     |
     | wsrep_local_recv_queue_min       | 0                                                     |
     | wsrep_local_recv_queue_avg       | 0.100000                                              |
     | wsrep_local_cached_downto        | 2                                                     |
     | wsrep_flow_control_paused_ns     | 0                                                     |
     | wsrep_flow_control_paused        | 0.000000                                              |
     | wsrep_flow_control_sent          | 0                                                     |
     | wsrep_flow_control_recv          | 0                                                     |
     | wsrep_flow_control_interval      | [ 173, 173 ]                                          |
     | wsrep_flow_control_interval_low  | 173                                                   |
     | wsrep_flow_control_interval_high | 173                                                   |
     | wsrep_flow_control_status        | OFF                                                   |
     | wsrep_cert_deps_distance         | 1.000000                                              |
     | wsrep_apply_oooe                 | 0.000000                                              |
     | wsrep_apply_oool                 | 0.000000                                              |
     | wsrep_apply_window               | 1.000000                                              |
     | wsrep_commit_oooe                | 0.000000                                              |
     | wsrep_commit_oool                | 0.000000                                              |
     | wsrep_commit_window              | 1.000000                                              |
     | wsrep_local_state                | 4                                                     |
     | wsrep_local_state_comment        | Synced                                                |
     | wsrep_cert_index_size            | 1                                                     |
     | wsrep_cert_bucket_count          | 22                                                    |
     | wsrep_gcache_pool_size           | 2413                                                  |
     | wsrep_causal_reads               | 0                                                     |
     | wsrep_cert_interval              | 0.000000                                              |
     | wsrep_ist_receive_status         |                                                       |
     | wsrep_ist_receive_seqno_start    | 0                                                     |
     | wsrep_ist_receive_seqno_current  | 0                                                     |
     | wsrep_ist_receive_seqno_end      | 0                                                     |
     | wsrep_incoming_addresses         | 10.30.20.196:3306,10.30.20.197:3306,10.30.20.198:3306 |
     | wsrep_desync_count               | 0                                                     |
     | wsrep_evs_delayed                |                                                       |
     | wsrep_evs_evict_list             |                                                       |
     | wsrep_evs_repl_latency           | 0/0/0/0/0                                             |
     | wsrep_evs_state                  | OPERATIONAL                                           |
     | wsrep_gcomm_uuid                 | 07c8c8fe-a998-11e7-883e-06949cfe5af3                  |
     | wsrep_cluster_conf_id            | 3                                                     |
     | wsrep_cluster_size               | 3                                                     |
     | wsrep_cluster_state_uuid         | 591179cb-a98e-11e7-b9aa-07df8a228fe9                  |
     | wsrep_cluster_status             | Primary                                               |
     | wsrep_connected                  | ON                                                    |
     | wsrep_local_bf_aborts            | 0                                                     |
     | wsrep_local_index                | 1                                                     |
     | wsrep_provider_name              | Galera                                                |
     | wsrep_provider_vendor            | Codership Oy <info@codership.com>                     |
     | wsrep_provider_version           | 3.22(r8678538)                                        |
     | wsrep_ready                      | ON                                                    |
     +----------------------------------+-------------------------------------------------------+
    
  2. Verify that you can log in to the MySQL server by running the below command on the Morpheus application server(s).

    mysql -u morpheusadmin -p  -h 192.168.10.100
    

    Note

    This command requires the mysql client to be installed. If you are on a Windows machine you can connect to the server using MySQL Workbench, which can be found here: https://www.mysql.com/products/workbench/

RabbitMQ Cluster

RabbitMQ Installation and Configuration

Important

This is a sample configuration only. Customer configurations and requirements will vary.

Prerequisites

yum install epel-release
yum install erlang

Install RabbitMQ on the 3 nodes
wget https://dl.bintray.com/rabbitmq/rabbitmq-server-rpm/rabbitmq-server-3.6.12-1.el7.noarch.rpm

 rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc

 yum -y install rabbitmq-server-3.6.12-1.el7.noarch.rpm

 chkconfig rabbitmq-server on

 rabbitmq-server -detached
On Node 1:

cat /var/lib/rabbitmq/.erlang.cookie

Copy this value.

On Nodes 2 & 3:

  1. Overwrite /var/lib/rabbitmq/.erlang.cookie with the value from the previous step and change its permissions using the following commands:

    chown rabbitmq:rabbitmq /var/lib/rabbitmq/*
    chmod 400 /var/lib/rabbitmq/.erlang.cookie
    
  2. Edit the /etc/hosts file to refer to the shortname of node 1.

    example:

    10.30.20.100 rabbit-1
    
  3. Run the following commands to join each node to the cluster:

    rabbitmqctl stop
    rabbitmq-server -detached
    rabbitmqctl stop_app
    rabbitmqctl join_cluster rabbit@<<node 1 shortname>>
    rabbitmqctl start_app
    
On Node 1:

rabbitmqctl add_user <<admin username>> <<password>>
rabbitmqctl set_permissions -p / <<admin username>> ".*" ".*" ".*"
rabbitmqctl set_user_tags <<admin username>> administrator
On All Nodes:

rabbitmq-plugins enable rabbitmq_stomp

Elasticsearch

Install a 3-node Elasticsearch Cluster on CentOS 7

Important

This is a sample configuration only. Customer configurations and requirements will vary.

Requirements

  1. Three Existing CentOS 7+ nodes accessible to the Morpheus Appliance

  2. Install Java on each node

    You can install the latest OpenJDK with the command:

    sudo yum install java-1.8.0-openjdk.x86_64
    

    To verify your JRE is installed and can be used, run the command:

    java -version
    

    The result should look like this:

    Output of java -version
    openjdk version "1.8.0_65"
    OpenJDK Runtime Environment (build 1.8.0_65-b17)
    OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)
    

Installation

  1. Download and Install Elasticsearch

    Elasticsearch can be downloaded directly from elastic.co in zip, tar.gz, deb, or rpm packages. For CentOS, it’s best to use the native rpm package which will install everything you need to run Elasticsearch. Download it in a directory of your choosing with the command:

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.10.rpm
    

    Then install it in the usual CentOS way with the rpm command like this:

    sudo rpm -ivh elasticsearch-5.6.10.rpm
    

    This results in Elasticsearch being installed in /usr/share/elasticsearch/ with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch.

    To make sure Elasticsearch starts and stops automatically, add its init script to the default runlevels with the command:

    sudo systemctl enable elasticsearch.service
    

Note

If you manage an Elasticsearch cluster externally from Morpheus, follow the steps located on the Elasticsearch website to upgrade to the latest version compatible with Morpheus.

  2. Configuring Elasticsearch

    Now that Elasticsearch and its Java dependencies have been installed, it is time to configure Elasticsearch.

    The Elasticsearch configuration files are in the /etc/elasticsearch directory. There are two files:

    sudo vi /etc/elasticsearch/elasticsearch.yml
    
    elasticsearch.yml

    Configures the Elasticsearch server settings. This is where all options, except those for logging, are stored, which is why we are mostly interested in this file.

    logging.yml

    Provides configuration for logging. In the beginning, you don’t have to edit this file. You can leave all default logging options. You can find the resulting logs in /var/log/elasticsearch by default.

    The first variables to customize on any Elasticsearch server are node.name and cluster.name in elasticsearch.yml. As their names suggest, node.name specifies the name of the server (node) and cluster.name the cluster to which the node belongs.

    Node 1

    cluster.name: morpheusha1
    node.name: "morpheuses1"
    discovery.zen.ping.unicast.hosts: ["10.30.20.91","10.30.20.149","10.30.20.165"]
    

    Node 2

    cluster.name: morpheusha1
    node.name: "morpheuses2"
    discovery.zen.ping.unicast.hosts: ["10.30.20.91","10.30.20.149","10.30.20.165"]
    

    Node 3

    cluster.name: morpheusha1
    node.name: "morpheuses3"
    discovery.zen.ping.unicast.hosts: ["10.30.20.91","10.30.20.149","10.30.20.165"]
    

    For the above changes to take effect, you will have to restart Elasticsearch with the command:

    sudo service elasticsearch restart
    
  3. Testing

    By now, Elasticsearch should be running on port 9200. You can test it with curl, the command-line URL transfer tool, and a simple GET request like this (the version fields in the sample output will reflect your installed Elasticsearch version):

    [~]$ sudo curl -X GET 'http://10.30.20.149:9200'
          {
            "status" : 200,
            "name" : "morpheuses1",
            "cluster_name" : "morpheusha1",
            "version" : {
              "number" : "1.7.3",
              "build_hash" : "05d4530971ef0ea46d0f4fa6ee64dbc8df659682",
              "build_timestamp" : "2015-10-15T09:14:17Z",
              "build_snapshot" : false,
              "lucene_version" : "4.10.4"
            },
    
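Once all three nodes are configured and restarted, cluster formation can be checked with the _cluster/health endpoint (same example IP as above); a healthy three-node cluster reports "number_of_nodes" : 3 in the response:

    [~]$ sudo curl -X GET 'http://10.30.20.149:9200/_cluster/health?pretty'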

Application Tier

Morpheus configuration is controlled by a configuration file located at /etc/morpheus/morpheus.rb. This file is read when you run morpheus-ctl reconfigure after installing the appliance package. Each section is tied to a deployment tier: database is mysql, message queue is rabbitmq, search index is elasticsearch. There are no entries for the web and application tiers since those are part of the core application server where the configuration file resides.

  1. Download and install the Morpheus Appliance Package
  2. Next we must install the package onto the machine and configure the morpheus services:

    sudo rpm -i morpheus-appliance-x.x.x-1.x86_64.rpm

  3. After installing and prior to reconfiguring, edit the morpheus.rb file:

    sudo vi /etc/morpheus/morpheus.rb

Change the values to match your configured services:

Note

The values below are examples. Update hosts, ports, usernames and password with your specifications. Only include entries for services you wish to externalize.

mysql['enable'] = false
mysql['host'] = {'10.30.20.139' => 3306,  '10.30.20.153' => 3306,  '10.30.20.196' => 3306}
mysql['morpheus_db'] = 'morpheusdb'
mysql['morpheus_db_user'] = 'morpheusadmin'
mysql['morpheus_password'] = 'morpheus4admin!'
rabbitmq['enable'] = false
rabbitmq['vhost'] = 'morph'
rabbitmq['queue_user'] = 'lbuser'
rabbitmq['queue_user_password'] = 'morpheus4admin'
rabbitmq['host'] = 'morpheus-ha-mq-lb-1.den.morpheusdata.com'
rabbitmq['port'] = '5672'
rabbitmq['stomp_port'] = '61613'
rabbitmq['heartbeat'] = 50
elasticsearch['enable'] = false
elasticsearch['cluster'] = 'morpheusha1'
elasticsearch['es_hosts'] = {'10.30.20.91' => 9300, '10.30.20.149' => 9300, '10.30.20.165' => 9300}
  4. Reconfigure Morpheus

    sudo morpheus-ctl reconfigure

3 Node with Externalized DB Configuration

Assumptions

This guide assumes the following:

  • There is an externalized database running for Morpheus to access.
  • The database service is a MySQL dialect (MySQL, MariaDB, Galera, etc…)
  • A database has been created for Morpheus as well as a user and proper grants have been run for the user. Morpheus will create the schema.
  • The baremetal nodes cannot access the public internet
  • The base OS is RHEL 7.x
  • Shortname versions of hostnames will be resolvable
  • All nodes have access to a shared volume for /var/opt/morpheus/morpheus-ui. This can be done as a post startup step.
  • This configuration will support the complete loss of a single node, but no more. Specifically, the Elasticsearch tier requires at least two nodes to remain clustered.

Steps

  1. First begin by downloading the requisite Morpheus packages either to the nodes or to your workstation for transfer. These packages need to be made available on the nodes you wish to install Morpheus on.

    [root@app-server-1 ~]# wget https://downloads.gomorpheus.com/yum/el/7/noarch/morpheus-appliance-offline-3.1.5-1.noarch.rpm
    [root@app-server-1 ~]# wget https://downloads.gomorpheus.com/yum/el/7/x86_64/morpheus-appliance-3.1.5-1.el7.x86_64.rpm
    
  2. Once the packages are available on the nodes they can be installed. Make sure that no steps beyond the rpm install are run.

    [root@app-server-1 ~]# rpm -i morpheus-appliance-3.1.5-1.el7.x86_64.rpm
    [root@app-server-1 ~]# rpm -i morpheus-appliance-offline-3.1.5-1.noarch.rpm
    
  3. Next you will need to edit the Morpheus configuration file on each node.

    Node 1

    appliance_url 'https://morpheus1.localdomain'
    elasticsearch['es_hosts'] = {'10.30.20.135' => 9300, '10.30.20.136' => 9300, '10.30.20.137' => 9300}
    elasticsearch['node_name'] = 'morpheus1'
    elasticsearch['host'] = '0.0.0.0'
    rabbitmq['host'] = '0.0.0.0'
    rabbitmq['nodename'] = 'rabbit@esmort01'
    mysql['enable'] = false
    mysql['host'] = '10.130.12.228'
    mysql['morpheus_db'] = 'morpheusdb'
    mysql['morpheus_db_user'] = 'morpheus'
    mysql['morpheus_password'] = 'password'
    

    Node 2

    appliance_url 'https://morpheus2.localdomain'
    elasticsearch['es_hosts'] = {'10.30.20.135' => 9300, '10.30.20.136' => 9300, '10.30.20.137' => 9300}
    elasticsearch['node_name'] = 'morpheus2'
    elasticsearch['host'] = '0.0.0.0'
    rabbitmq['host'] = '0.0.0.0'
    rabbitmq['nodename'] = 'rabbit@esmort02'
    mysql['enable'] = false
    mysql['host'] = '10.130.12.228'
    mysql['morpheus_db'] = 'morpheusdb'
    mysql['morpheus_db_user'] = 'morpheus'
    mysql['morpheus_password'] = 'password'
    

    Node 3

    appliance_url 'https://morpheus3.localdomain'
    elasticsearch['es_hosts'] = {'10.30.20.135' => 9300, '10.30.20.136' => 9300, '10.30.20.137' => 9300}
    elasticsearch['node_name'] = 'morpheus3'
    elasticsearch['host'] = '0.0.0.0'
    rabbitmq['host'] = '0.0.0.0'
    rabbitmq['nodename'] = 'rabbit@esmort03'
    mysql['enable'] = false
    mysql['host'] = '10.130.12.228'
    mysql['morpheus_db'] = 'morpheusdb'
    mysql['morpheus_db_user'] = 'morpheus'
    mysql['morpheus_password'] = 'password'
    

Note

If you are running MySQL in a Master/Master configuration, we need to slightly alter the mysql['host'] line in morpheus.rb to account for both masters in a failover configuration. As an example:

mysql['host'] = '10.130.12.228:3306,10.130.12.109'

Morpheus appends port 3306 to the final IP in the string, which is why we leave it off there but explicitly set it for the first IP. The order of IPs matters: it should be the same across all three Morpheus application servers. As mentioned, this is a failover configuration for MySQL in that the application will only read/write from the second master if the first master becomes unavailable. This avoids the commit lock issues that can arise from a load-balanced Master/Master.

Run the reconfigure on all nodes

[root@app-server-1 ~]# morpheus-ctl reconfigure

Morpheus will come up on all nodes and Elasticsearch will auto-cluster. The only item left is the manual clustering of RabbitMQ.

Select one of the nodes to be your Source Of Truth (SOT) for RabbitMQ clustering. We need to share the RabbitMQ secrets and the erlang cookie, then join the other nodes to the SOT node. Begin by copying the secrets from the SOT node to the other nodes:

[root@app-server-1 ~]# cat /etc/morpheus/morpheus-secrets.json
{
  "mysql": {
    "root_password": "wam457682b67858ae2cf4bc",
    "morpheus_password": "password",
    "ops_password": "98d9677686698d319r6356ae3a77"
  },
  "rabbitmq": {
    "morpheus_password": "adff00cf8714b25mc",
    "queue_user_password": "r075f26158c1fes2",
    "cookie": "6458933CD86782AD39E25"
  },
  "vm-images": {
    "s3": {
      "aws_access_id": "AKIAI6OFPBN4NWSFBXRQ",
      "aws_secret_key": "a9vxxjH5xkgh6dHgRjLl37i33rs8pwRe3"
   }
  }
 }

Then copy the erlang.cookie from the SOT node to the other nodes

[root@app-server-1 ~]# cat /opt/morpheus/embedded/rabbitmq/.erlang.cookie
# 754363AD864649RD63D28
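Distributing the cookie can be scripted. Below is a minimal sketch, assuming root SSH access between the nodes; the hostnames passed to the function are placeholders for your environment, and the helper itself is illustrative, not part of morpheus-ctl:

```shell
# Sketch: push the SOT node's erlang cookie to another node.
# Assumes root SSH access; hostnames are placeholders for your environment.
COOKIE=/opt/morpheus/embedded/rabbitmq/.erlang.cookie

copy_cookie() {
  local node=$1
  # overwrite the target node's cookie with the SOT node's copy
  scp "$COOKIE" "root@${node}:${COOKIE}"
  # the erlang cookie must only be readable by its owner
  ssh "root@${node}" "chmod 600 ${COOKIE}"
}
```

For this three-node layout you would run `copy_cookie app-server-2` and `copy_cookie app-server-3` from the SOT node.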

Once this is done, run a reconfigure on the two nodes that are NOT the SOT node.

[root@app-server-2 ~]# morpheus-ctl reconfigure

Note

This step will fail. This is ok, and expected. If the reconfigure hangs then use Ctrl+C to quit the reconfigure run and force a failure.

Subsequently we need to stop and start RabbitMQ on the non-SOT nodes.

[root@app-server-2 ~]# morpheus-ctl stop rabbitmq
[root@app-server-2 ~]# morpheus-ctl start rabbitmq
[root@app-server-2 ~]# PATH=/opt/morpheus/sbin:/opt/morpheus/embedded/sbin:/opt/morpheus/embedded/bin:$PATH
[root@app-server-2 ~]# rabbitmqctl stop_app

Stopping node 'rabbit@app-server-2' ...

[root@app-server-2 ~]# rabbitmqctl join_cluster rabbit@app-server-1

Clustering node 'rabbit@app-server-2' with 'rabbit@app-server-1' ...

[root@app-server-2 ~]# rabbitmqctl start_app

Starting node 'rabbit@app-server-2' ...

Now run the reconfigure again:

[root@app-server-2 ~]# morpheus-ctl reconfigure

Once the Rabbit services are up and clustered on all nodes they need to be set to HA/Mirrored Queues:

[root@app-server-2 ~]# rabbitmqctl set_policy -p morpheus --priority 1 --apply-to all ha ".*" '{"ha-mode":"all"}'

The last thing to do is restart the Morpheus UI on the two nodes that are NOT the SOT node.

[root@app-server-2 ~]# morpheus-ctl restart morpheus-ui

If this command times out then run:

[root@app-server-2 ~]# morpheus-ctl kill morpheus-ui
[root@app-server-2 ~]# morpheus-ctl start morpheus-ui

You will be able to verify that the UI services have restarted properly by inspecting the logfiles. A standard practice after running a restart is to tail the UI log file.

[root@app-server-2 ~]# morpheus-ctl tail morpheus-ui

Lastly, we need to ensure that Elasticsearch is configured in such a way as to support a quorum of 2. We need to do this step on EVERY NODE.

[root@app-server-2 ~]# echo "discovery.zen.minimum_master_nodes: 2" >> /opt/morpheus/embedded/elasticsearch/config/elasticsearch.yml
[root@app-server-2 ~]# morpheus-ctl restart elasticsearch
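The value 2 comes from the usual split-brain formula: a cluster of n master-eligible nodes needs a quorum of floor(n/2) + 1. A quick sanity check:

```shell
# quorum = floor(n / 2) + 1 master-eligible nodes must agree
nodes=3
echo $(( nodes / 2 + 1 ))
```

For a three-node cluster this prints 2, which is why discovery.zen.minimum_master_nodes is set to 2 on every node.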

Note

For moving /var/opt/morpheus/morpheus-ui files into a shared volume make sure ALL Morpheus services on ALL three nodes are down before you begin.

[root@app-server-1 ~]# morpheus-ctl stop

Permissions are as important as content, so make sure ownership and permissions are preserved when copying the directory contents to the shared volume. Subsequently you can start all Morpheus services on all three nodes and tail the Morpheus UI log file to check for errors.
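Archive-mode copying is one way to keep ownership and modes intact. A minimal sketch, where /mnt/morpheus-shared is a hypothetical mount point for the shared volume and the helper name is ours; run it only after all Morpheus services are stopped on all three nodes:

```shell
# Sketch: copy the morpheus-ui data directory onto a shared volume.
# cp -a (archive mode) preserves permissions, timestamps and symlinks,
# and ownership as well when run as root.
migrate_ui_data() {
  local src=$1 dest=$2
  mkdir -p "$dest"
  cp -a "$src/." "$dest/"
}
```

Usage would look like `migrate_ui_data /var/opt/morpheus/morpheus-ui /mnt/morpheus-shared/morpheus-ui`, after which the shared copy is mounted back into place per your storage setup.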

Database Migration

If your new installation is part of a migration, you need to move the data from your original Morpheus database to the new one. This is easily accomplished with a SQL dump.

To begin this, stop the Morpheus UI on your original Morpheus server:

[root@app-server-old ~]# morpheus-ctl stop morpheus-ui

Once this is done you can safely export. To access the MySQL shell we will need the password for the Morpheus DB user. We can find this in the morpheus-secrets file:

[root@app-server-old ~]# cat /etc/morpheus/morpheus-secrets.json

Take note of this password as it will be used to invoke the dump. Morpheus provides embedded MySQL binaries for this task. Invoke mysqldump via the embedded path and specify the host. In this example the Morpheus database lives in the MySQL instance listening on localhost. Enter the password copied from the previous step when prompted:

[root@app-server-old ~]# /opt/morpheus/embedded/mysql/bin/mysqldump -u morpheus -h 127.0.0.1 morpheus -p > /tmp/morpheus_backup.sql
Enter password:
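Before transferring the file it is worth confirming the dump actually completed: mysqldump appends a "Dump completed" marker as its final line, and a truncated file (interrupted session, full disk) will lack it. A small sketch; the helper name is ours, not a Morpheus tool:

```shell
# Sketch: verify a mysqldump file ends with mysqldump's completion marker.
check_dump() {
  if tail -n 1 "$1" | grep -q 'Dump completed'; then
    echo "dump looks complete"
  else
    echo "dump appears truncated"
  fi
}
```

Usage: `check_dump /tmp/morpheus_backup.sql`.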

This file needs to be pushed to the new Morpheus Installation’s backend. Depending on the GRANTS in the new MySQL backend, this will likely require moving this file to one of the new Morpheus frontend servers. Once the file is in place it can be imported into the backend. Begin by ensuring the Morpheus UI service is stopped on all of the application servers:

[root@app-server-1 ~]# morpheus-ctl stop morpheus-ui
[root@app-server-2 ~]# morpheus-ctl stop morpheus-ui
[root@app-server-3 ~]# morpheus-ctl stop morpheus-ui

Then you can import the MySQL dump into the target database using the embedded MySQL binaries, specifying the database host, and entering the password for the Morpheus user when prompted:

[root@app-server-1 ~]# /opt/morpheus/embedded/mysql/bin/mysql -u morpheus -h 10.130.2.38 morpheus -p < /tmp/morpheus_backup.sql
Enter password:

Recovery

If a node crashes, Morpheus will usually start upon boot of the server and the services will self-recover. However, there are cases where RabbitMQ and Elasticsearch are unable to recover in a clean fashion and minor manual intervention is required. Regardless, it is considered best practice when recovering from a restart to perform some manual health checks:

[root@app-server-1 ~]# morpheus-ctl status
run: check-server: (pid 17808) 7714s; run: log: (pid 549) 8401s
run: elasticsearch: (pid 19207) 5326s; run: log: (pid 565) 8401s
run: guacd: (pid 601) 8401s; run: log: (pid 573) 8401s
run: morpheus-ui: (pid 17976) 7633s; run: log: (pid 555) 8401s
run: nginx: (pid 581) 8401s; run: log: (pid 544) 8401s
run: rabbitmq: (pid 17850) 7708s; run: log: (pid 542) 8401s
run: redis: (pid 572) 8401s; run: log: (pid 548) 8401s

But, a status can report false positives if, say, RabbitMQ is in a boot loop or Elasticsearch is up, but not able to join the cluster. It is always advisable to tail the logs of the services to investigate their health.

[root@app-server-1 ~]# morpheus-ctl tail rabbitmq
[root@app-server-1 ~]# morpheus-ctl tail elasticsearch

Output that would indicate a problem with RabbitMQ would be visible as a stack trace resembling this example:

[Image: HA3nodeRabbitMQ.png, an example RabbitMQ stack trace]

And for Elasticsearch:

[Image: HA3nodeElasticSearch.png, an example Elasticsearch stack trace]

To minimize disruption to the user interface, it is advisable to remedy Elasticsearch clustering first. Due to write locking in Elasticsearch it can be required to restart other nodes in the cluster to allow the recovering node to join. Begin by determining which Elasticsearch node became the master during the outage. On one of the two other nodes (not the recovered node):

[root@app-server-2 ~]# curl localhost:9200/_cat/nodes
app-server-1 10.130.2.13 7 47 0.21 d * morpheus1
localhost 127.0.0.1 4 30 0.32 d m morpheus2

The master is determined by identifying the row with the ‘*’ in it. SSH to this node (if different) and restart Elasticsearch.

[root@app-server-1 ~]# morpheus-ctl restart elasticsearch

Go to the other of the two ‘up’ nodes and run the curl command again. If the output contains three nodes then Elasticsearch has been recovered and you can move on to re-clustering RabbitMQ. Otherwise you will see output that contains only the node itself:

[root@app-server-2 ~]# curl localhost:9200/_cat/nodes
localhost 127.0.0.1 4 30 0.32 d * morpheus2

If this is the case then restart Elasticsearch on this node as well:

[root@app-server-2 ~]# morpheus-ctl restart elasticsearch

After this you should be able to run the curl command and see all three nodes have rejoined the cluster:

[root@app-server-2 ~]# curl localhost:9200/_cat/nodes
app-server-1 10.130.2.13 9 53 0.31 d * morpheus1
localhost 127.0.0.1 7 32 0.22 d m morpheus2
app-server-3 10.130.2.11 3 28 0.02 d m morpheus3

The most frequent case of restart errors for RabbitMQ is with epmd failing to restart. Morpheus’s recommendation is to ensure the epmd process is running and daemonized by starting it:

[root@app-server-1 ~]# /opt/morpheus/embedded/lib/erlang/erts-5.10.4/bin/epmd -daemon

And then restarting RabbitMQ:

[root@app-server-1 ~]# morpheus-ctl restart rabbitmq

And then restarting the Morpheus UI service:

[root@app-server-1 ~]# morpheus-ctl restart morpheus-ui

Again, it is always advisable to monitor the startup to ensure the Morpheus Application is starting without error:

[root@app-server-1 ~]# morpheus-ctl tail morpheus-ui

Recovery Thoughts/Further Discussion: If Morpheus UI cannot connect to RabbitMQ, Elasticsearch or the database tier it will fail to start. The Morpheus UI logs can indicate if this is the case.

Aside from RabbitMQ, there can be false positives concerning Elasticsearch's running status. The biggest challenge with Elasticsearch is that a restarted node can have trouble joining the ES cluster. This is tolerable in the case of ES because the minimum_master_nodes setting will not allow the un-joined singleton to be consumed until it joins the cluster. Morpheus will still start if it can reach the other two ES hosts, which are still clustered.

The challenge with RabbitMQ is that it is load balanced behind Morpheus for requests, but each Morpheus application server needs to bootstrap the RabbitMQ instance tied to it. Thus, if it cannot reach its own RabbitMQ, startup will fail.

Similarly, if a Morpheus UI service cannot reach the database, startup will fail. However, if the database is externalized and failover is configured for Master/Master, then there should be ample opportunity for Morpheus to connect to the database tier.

Because Morpheus can start even though the Elasticsearch node on the same host fails to join the cluster, it is advisable to investigate the health of ES on the restarted node after the services are up. This can be done by accessing the endpoint with curl and inspecting the output. The status should be “green” and number of nodes should be “3”:

[root@app-server-1 ~]# curl localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "morpheus",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 110,
  "active_shards" : 220,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}

If this is not the case it is worth investigating the Elasticsearch logs to understand why the singleton node is having trouble joining the cluster. These can be found at:

/var/log/morpheus/elasticsearch/current

Outside of these stateful tiers, the “morpheus-ctl status” command will not output a “run” status unless the service is successfully running. If a stateless service reports a failure to run, the logs should be investigated and/or sent to Morpheus for additional support. Logs for all Morpheus embedded services are found below:

/var/log/morpheus

Morpheus CLI

Installing on Linux

The Morpheus CLI is a Ruby-based CLI that provides a lot of functionality out of the box and is rapidly growing toward covering every task that can be performed in the Morpheus UI. It is also a great way to start exploring the Morpheus API and understanding aspects of the data model.

Installation

A prerequisite to running the CLI is Ruby 2.2.0+ (2.3.0 recommended). To install Ruby, please follow these instructions:

ruby_prerequisite

Once the Ruby runtime is installed, simply use RubyGems to install the CLI:

gem install morpheus-cli

Once the gem is installed, all CLI commands can be run in the shell via morpheus.

Installing on Windows

The Morpheus CLI is capable of running on many platforms thanks to its Ruby runtime, including Windows. To get started, we must first ensure Ruby is installed on the Windows machine in question. To do this, please visit ruby_prerequisite and download at least Ruby version 2.2.0 (2.3.3 recommended).

Note

When installing Ruby on Windows, make sure the option to add the Ruby binaries to your PATH is selected.

Now that Ruby is installed, simply open a PowerShell window and run:

gem install morpheus-cli --no-ri --no-rdoc

A list of installed dependencies will scroll by on screen. Once this finishes, the CLI setup is complete. All that remains is configuring the CLI to point to an appliance:

morpheus remote add myapp https://applianceUrl
morpheus remote use myapp
morpheus login

Credentials are used to acquire an access token, which is stored in the user's home directory in a folder called .morpheus. All commands provided by the CLI are now available, just as if running in a *nix based environment.

Setup

The first thing to do after installing the CLI is to point it at an appliance. The CLI can be pointed at many appliances and uses the RESTful OAuth public developer APIs to perform tasks. To set this up, simply add a remote appliance with the morpheus remote add command:

morpheus remote add myappliance https://applianceUrl
morpheus remote use myappliance
morpheus login

There are several commands available for dealing with the configuration of remote appliances. To see what commands are available, just type:

morpheus remote

Getting Started

To get started with the Morpheus CLI, it's helpful to use morpheus shell. The shell provides command history and some autocomplete features that make learning easier. The morpheus prefix can be omitted from the commands below while in shell mode.

To confirm that we are hooked into the appliance properly, let's check our authentication information:

morpheus> whoami
Current User
==================

ID: 1
Account: Labs (Master Account)
First Name: Demo
Last Name: Environment
Username: david
Email: david@morpheusdata.com
Role: System Admin

Remote Appliance
==================

Name: demo
Url: https://demo.morpheusdata.com
Build Version: 2.10.0

Fantastic! We are now ready to start our adventure in the Morpheus CLI. If this command fails, verify that the appliance URL entered previously is correct and that the provided credentials were entered correctly.

While the CLI is relatively young, it already provides many features that make it convenient for working with Morpheus. Most base commands contain subcommands. Let's look at what happens when we simply type morpheus on the command line:

Usage: morpheus [command] [options]

Commands:
    remote
    login
    logout
    whoami
    groups
    clouds
    hosts
    load-balancers
    shell
    tasks
    workflows
    deployments
    instances
    apps
    app-templates
    deploy
    license
    instance-types
    security-groups
    security-group-rules
    accounts
    users
    roles
    key-pairs
    virtual-images
    library
    version

As you can see, the CLI is split into sections. Each section has subcommands available for performing certain actions. For example, let's look at morpheus instances:

morpheus> instances
Usage: morpheus instances [list,add,remove,stop,start,restart,backup,run-workflow,stop-service,start-service,restart-service,resize,upgrade,clone,envs,setenv,delenv] [name]

These commands typically make it easier to figure out what command subsets are available and the CLI documentation can provide helpful information in more depth on each command option.

Provisioning

To get started provisioning instances from the CLI, a few prerequisites must be set up. First we must decide which Group we want to provision into. Get a list of available groups by running morpheus groups list:

morpheus> groups list

Morpheus Groups
==================


=  Automation - denver
=> Demo - Multi
=  Morpheus AWS - US-West
=  Morpheus Azure - US West
=  Morpheus Google - Auto
=  morpheus-approvals -
=  NIck-Demo - Chicago
=  San Mateo Hyper-V - San Mateo, CA
=  San Mateo Nutanix - San Mateo, CA
=  San Mateo Openstack - San Mateo, CA
=  San Mateo Servers - San Mateo, CA
=  San Mateo UCS - San Mateo, CA
=  San Mateo Vmware - San Mateo, CA
=  San Mateo Xen - San Mateo, CA
=  snow-approvals -
=  SoftLayer - Dallas-9

In the above example, the currently active group is Demo, as indicated by the => symbol to the left of the group name. To switch groups, simply run:

morpheus groups use "San Mateo Xen"

This now becomes the active group to provision into. Before provisioning we must also specify the cloud to provision into, and that cloud must be in the currently active group. To see a list of clouds in the relevant group, simply run:

morpheus clouds list -g [groupName]

This will scope the clouds command to list only clouds in the group specified.

Morpheus makes it very easy to get started provisioning via the CLI. It provides a list of instance types that can be provisioned via the instance-types list command. Let's get started by provisioning an Ubuntu virtual machine.

morpheus> instances add

Usage: morpheus instances add TYPE NAME
  -g, --group GROUP                Group
  -c, --cloud CLOUD                Cloud
  -O, --option OPTION              Option
  -N, --no-prompt                  Skip prompts. Use default values for all optional fields.
  -j, --json                       JSON Output
  -d, --dry-run                    Dry Run, print json without making the actual request.
  -r, --remote REMOTE              Remote Appliance
  -U, --url REMOTE                 API Url
  -u, --username USERNAME          Username
  -p, --password PASSWORD          Password
  -T, --token ACCESS_TOKEN         Access Token
  -C, --nocolor                    ANSI
  -V, --debug                      Print extra output for debugging.
  -h, --help                       Prints this help
morpheus> instances add ubuntu MyInstanceName -c "San Mateo Vmware"

morpheus> instances add ubuntu -c "San Mateo Vmware" dre-test
Layout ['?' for options]: ?
* Layout [-O layout=] - Select which configuration of the instance type to be provisioned.

Options
===============
* Docker Ubuntu Container [104]
* VMware VM [105]
* Existing Ubuntu [497]


Layout ['?' for options]: VMware VM
Plan ['?' for options]: ?
* Plan [-O servicePlan=] - Choose the appropriately sized plan for this instance

Options
===============
* Memory: 512MB Storage: 10GB [10]
* Memory: 1GB Storage: 10GB [11]
* Memory: 2GB Storage: 20GB [12]
* Memory: 4GB Storage: 40GB [13]
* Memory: 8GB Storage: 80GB [14]
* Memory: 16GB Storage: 160GB [15]
* Memory: 24GB Storage: 240GB [16]
* Memory: 32GB Storage: 320GB [17]


Plan ['?' for options]: 10
Root Volume Label [root]:
Root Volume Size (GB) [10]:
Root Datastore ['?' for options]: ?
* Root Datastore [-O rootVolume.datastoreId=] - Choose a datastore.

Options
===============
* Auto - Cluster [autoCluster]
* Auto - Datastore [auto]
* cluster: labs-ds-cluster - 2.9TB Free [19]
* store: ds-130-root - 178.5GB Free [5]
* store: ds-130-vm - 699.0GB Free [6]
* store: ds-131-root - 191.3GB Free [1]
* store: ds-131-vm - 798.9GB Free [9]
* store: ds-132-root - 191.2GB Free [4]
* store: ds-132-vm - 799.4GB Free [10]
* store: ds-177-root - 399.4GB Free [3]
* store: labs-vm - 2.9TB Free [18]
* store: VeeamBackup_WIN-0JNJSO32KI4 - 5.1GB Free [8]
* store: VeeamBackup_WIN-QGARB6FA1GQ - 2.7GB Free [17]


Root Datastore ['?' for options]: Auto - Cluster
Add data volume? (yes/no): no
Network ['?' for options]: VM Network
Network Interface Type ['?' for options]: E1000
IP Address: Using DHCP
Add another network interface? (yes/no): no
Public Key (optional) ['?' for options]:
Resource Pool ['?' for options]: ?
* Resource Pool [-O config.vmwareResourcePoolId=] -

Options
===============
* Resources [resgroup-56]
* Resources / Brian [resgroup-2301]
* Resources / Brian / Macbook [resgroup-2302]
* Resources / David [resgroup-2158]
* Resources / David / Macbook [resgroup-2160]

Resource Pool ['?' for options]: resgroup-2160

As can be seen in the example above, the CLI prompts the user for input on the options required to provision this particular instance type within this particular cloud. In this scenario it supports adding multiple disks and multiple network interfaces. It is also possible to skip these prompts and provision everything in one command line by using the -O optionName=value syntax:

morpheus> instances add ubuntu MyInstanceName -c "San Mateo Vmware"  -O layout=105 -O servicePlan=10 -O rootVolume.datastoreId=autoCluster

This causes the Morpheus CLI to skip prompting for input on these options. All inputs have an equivalent -O option that can be passed. To see what that option argument is, simply enter ? at the input prompt.

Now your VM should be provisioning and status can be checked by simply typing morpheus instances list.

List Arguments

Most of the list command types can be queried or paged via the cli. To do this simply look at the help information for the relevant list command

morpheus> instances list -h
Usage: morpheus [options]
-g, --group GROUP                Group Name
-m, --max MAX                    Max Results
-o, --offset OFFSET              Offset Results
-s, --search PHRASE              Search Phrase
-S, --sort ORDER                 Sort Order
-D, --desc                       Reverse Sort Order
-j, --json                       JSON Output
-r, --remote REMOTE              Remote Appliance
-U, --url REMOTE                 API Url
-u, --username USERNAME          Username
-p, --password PASSWORD          Password
-T, --token ACCESS_TOKEN         Access Token
-C, --nocolor                    ANSI
-V, --debug                      Print extra output for debugging.
-h, --help                       Prints this help

Ruby Installation

Step 1 – Installing Requirements

First, install all the packages required to build Ruby using the following command:

yum install gcc-c++ patch readline readline-devel zlib zlib-devel libyaml-devel libffi-devel openssl-devel make bzip2 autoconf automake libtool bison iconv-devel sqlite-devel

Step 2 – Install RVM

Install the latest stable version of RVM on your system using the following commands. They will automatically download all required files and install them on your system.

curl -sSL https://rvm.io/mpapis.asc | gpg --import -
curl -L get.rvm.io | bash -s stable

Then run the commands below to load the RVM environment:

source /etc/profile.d/rvm.sh
rvm reload

Step 3 – Verify Dependencies

Now use the following command to verify all dependencies are properly installed. It will install any missing dependencies on your system.

rvm requirements run
Checking requirements for centos.
Requirements installation successful.

Step 4 – Install Ruby 2.5

After setting up the RVM environment, install Ruby using the following command. Change the version in the command to the Ruby version you need to install.

rvm install 2.5
[Sample Output]

Searching for binary rubies, this might take some time.
No binary rubies available for: centos/7/x86_64/ruby-2.5.1.
Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
Checking requirements for centos.
Requirements installation successful.
Installing Ruby from source to: /usr/local/rvm/rubies/ruby-2.5.1, this may take a while depending on your cpu(s)...
ruby-2.5.1 - #downloading ruby-2.5.1, this may take a while depending on your connection...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13.3M  100 13.3M    0     0   866k      0  0:00:15  0:00:15 --:--:--  823k
ruby-2.5.1 - #extracting ruby-2.5.1 to /usr/local/rvm/src/ruby-2.5.1.....
ruby-2.5.1 - #configuring..................................................................
ruby-2.5.1 - #post-configuration..
ruby-2.5.1 - #compiling....................................................................
ruby-2.5.1 - #installing.............................
ruby-2.5.1 - #making binaries executable..
ruby-2.5.1 - #downloading rubygems-2.7.7
ruby-2.5.1 - #extracting rubygems-2.7.7.....................................................
ruby-2.5.1 - #removing old rubygems........
ruby-2.5.1 - #installing rubygems-2.7.7................................
ruby-2.5.1 - #gemset created /usr/local/rvm/gems/ruby-2.5.1@global
ruby-2.5.1 - #importing gemset /usr/local/rvm/gemsets/global.gems...................................................
ruby-2.5.1 - #generating global wrappers.......
ruby-2.5.1 - #gemset created /usr/local/rvm/gems/ruby-2.5.1
ruby-2.5.1 - #importing gemsetfile /usr/local/rvm/gemsets/default.gems evaluated to empty gem list
ruby-2.5.1 - #generating default wrappers.......
ruby-2.5.1 - #adjusting #shebangs for (gem irb erb ri rdoc testrb rake).
Install of ruby-2.5.1 - #complete
Ruby was built without documentation, to build it run: rvm docs generate-ri

Step 5 – Setup Default Ruby Version

First, list the Ruby versions installed on your system so we can see which version is currently in use and which is set as the default.

rvm list
ruby-2.3.5 [ x86_64 ]
   ruby-2.4.2 [ x86_64 ]
* ruby-2.4.4 [ x86_64 ]
=> ruby-2.5.1 [ x86_64 ]
# => - current
# =* - current && default
#  * - default

Then use the rvm command to set the default Ruby version to be used by applications:

rvm use 2.5 --default
Using /usr/local/rvm/gems/ruby-2.5.1

Step 6 – Verify Active Ruby Version

Use the following command to check the currently active Ruby version:

ruby --version
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]

Step 7 – Install Morpheus CLI

gem install morpheus-cli

Morpheus Agent

The Morpheus Agent is an important and powerful facet of Morpheus as an orchestration tool. Though it is not required (a unique capability of our platform versus some of the competitors out there), it is recommended because of the benefits it brings. Not only does it provide statistics on the guest operating system and resource utilization, it also enables monitoring and log aggregation. After an initial brownfield discovery, users can decide to convert unmanaged VMs to managed. The Morpheus Agent is very lightweight and secure.

Note

The agent is not required for an instance to become managed by Morpheus. If the agent is not installed, Morpheus still tries to aggregate stats, but coverage varies by cloud and can be limited or inaccurate.

The Morpheus Agent does not open any inbound network ports but rather only opens an outbound connection back to the Morpheus appliance over port 443 (https or wss protocol). This allows for a bidirectional command bus where instructions can be sent to orchestrate a workload without needing access to things like SSH or WinRM. The tool can even be installed at provision time via things like cloud-init, such that the Morpheus appliance itself doesn’t even need direct network access to the VLAN under which the workload resides. By doing this we address many of the network security concerns that come up with regards to the agent while demonstrating its security benefits as well as analytics benefits. We can even use this statistical data at the guest OS level rather than the hypervisor level to provide extremely precise right-sizing recommendations.
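As a rough illustration of the cloud-init approach mentioned above, an agent install command can be dropped into instance user data so the workload reaches out to the appliance on first boot. The endpoint path and appliance URL below are placeholders, not a documented endpoint contract; Morpheus generates the actual per-instance install command, so treat this only as a sketch of the pattern:

```yaml
#cloud-config
# Hypothetical sketch: applianceUrl and the script path are placeholders.
# The workload pulls the install script outbound over 443, so the appliance
# never needs inbound access to the workload's VLAN.
runcmd:
  - curl -k -s "https://applianceUrl/api/server-script/agentInstall" | bash
```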

Key Agent Features

  • Provides key enhanced statistics (disk usage, CPU usage, network, disk IO)
  • Handles log aggregation
  • Provides a command bus, so Morpheus does not need credentials to access a box; workflows can still run even if credentials are changed
  • SSH can be disabled on the box while Morpheus retains access
  • Agent can be installed via cloud-init for environments without direct network access from the appliance
  • The Morpheus Agent is optional
  • Makes a single persistent connection over an HTTPS web socket and runs as a service
  • Health checks for Linux (not available on Windows)
  • No inbound ports
  • Buffers and compresses logs and sends them in chunks to minimize packets
  • Can be configured to collect logs and send them elsewhere
  • Linux agent can be shrunk and should use less than 0.2% at peak (Windows agent less than 97 KB)
  • Runs workflows, supports expiration/shutdown policies, and can help rein in environments, amongst other things
  • Accepts and executes commands, writes files, and can manipulate the firewall