Installing on Kubernetes / OpenShift
Preparing the Kubernetes namespace
The deploy directory referenced below can be found here
- Search for the Custom Resource Definition (CRD) and create it if it doesn’t already exist.
  $ oc get crds | grep manageiqs.manageiq.org
  $ oc create -f deploy/crds/manageiq.org_manageiqs_crd.yaml
- Set up RBAC.
  $ oc create -f deploy/role.yaml
  $ oc create -f deploy/role_binding.yaml
  $ oc create -f deploy/service_account.yaml
- Deploy the operator in your namespace.
  $ oc create -f deploy/operator.yaml
New Installation
Create the Custom Resource
Make any necessary changes (e.g., applicationDomain) that apply to your environment, then create the Custom Resource (CR). The operator will take action to deploy the application based on the information in the CR. It may take several minutes for the database to be initialized and the workers to enter a ready state.
$ oc create -f deploy/crds/manageiq.org_v1alpha1_manageiq_cr.yaml
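For reference, a minimal CR for a new installation might look like the following sketch. The field names follow the sample CR file referenced above and the migration example later in this guide; applicationDomain is the value you are most likely to change, and your environment may require additional fields.

  apiVersion: manageiq.org/v1alpha1
  kind: ManageIQ
  metadata:
    name: <friendly name for your CR instance>
  spec:
    applicationDomain: <your application domain name>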
Migrating from Appliances
Notes
- Multi-server / multi-zone: The current podified architecture limits us to running a single server and zone per Kubernetes namespace. Therefore, when migrating from a multi-appliance and/or multi-zone appliance architecture, you will need to choose a single server whose identity the podified deployment will assume. This server should have the UI and web service roles enabled before taking the database backup to ensure that those workers will start when deployed in the podified environment. All desired roles and settings will need to be configured on this server.
- Multi-region: Multi-region deployments are slightly more complicated in a podified environment since postgres isn’t as easily exposed outside the project / cluster. If all of the region databases are running outside of the cluster, or all of the remote region databases are running outside of the cluster and the global database is in the cluster, everything is configured in the same way as appliances. If the global region database is migrated from an appliance to a pod, the replication subscriptions will need to be recreated. If any of the remote region databases are running in the cluster, the postgresql service for those databases will need to be exposed using a node port. To publish the postgresql service on a node port, patch the service:
  $ kubectl patch service/postgresql --patch '{"spec": {"type": "NodePort"}}'
  Now you will see the node port listed (31871 in this example) as well as the internal service port (5432). This node port can be used along with the IP address of any node in the cluster to access the postgresql service (see the example after this note).
  $ oc get service/postgresql
  NAME         TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
  postgresql   NodePort   192.0.2.1    <none>        5432:31871/TCP   2m
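For example, a remote region database exposed this way could be reached from outside the cluster with a command along these lines (the node IP address and database user are placeholders for your own values; the port is the node port from the output above):

  $ psql -h <IP address of any cluster node> -p 31871 -U <database user> vmdb_production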
Collect data from the appliance
- Take a backup of the database.
  $ pg_dump -Fc -d vmdb_production > /root/pg_dump
- Export the encryption key and Base64 encode it for the Kubernetes Secret.
  $ vmdb && rails r "puts Base64.encode64(ManageIQ::Password.v2_key.to_s)"
- Get the region number.
  $ vmdb && rails r "puts MiqRegion.my_region.region"
- Get the GUID of the server that you want to run as.
  $ vmdb && cat GUID
Restore the backup into the Kubernetes environment
- Create a YAML file defining the Custom Resource (CR). Minimally you’ll need the following:

  apiVersion: manageiq.org/v1alpha1
  kind: ManageIQ
  metadata:
    name: <friendly name for your CR instance>
  spec:
    applicationDomain: <your application domain name>
    databaseRegion: <region number from the appliance above>
    serverGuid: <GUID value from the appliance above>
- Create the CR in your namespace. Once created, the operator will create several additional resources and start deploying the app.
  $ oc create -f <file name from above>
- Edit the app secret, inserting the encryption key from the appliance. Replace the “encryption-key” value with the value exported from the appliance above (see the sketch below).
  $ oc edit secret app-secrets
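A sketch of the relevant portion of the secret in the editor; the value shown is a placeholder for the Base64-encoded key exported above (Kubernetes stores secret data values Base64 encoded, which is why the export command encodes it):

  apiVersion: v1
  kind: Secret
  metadata:
    name: app-secrets
  data:
    encryption-key: <Base64-encoded v2_key exported from the appliance>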
- Find the orchestrator pod and start a debug session into it. Keep this running in the background…
  $ oc get pods -o name | grep orchestrator
  $ oc debug pod/orchestrator-123456abcd-789ef
- Temporarily prevent the orchestrator from starting by adding the following to the deployment:
  $ oc edit deployment/orchestrator

  spec:
    template:
      spec:
        nodeSelector:
          kubernetes.io/hostname: nope
- Delete the old replica set; the new one will sit in a “pending” state.
  $ oc delete replicaset.apps/orchestrator-123456abcd
- Back in the orchestrator debug pod from above:
  $ cd /var/www/miq/vmdb
  $ source ./container_env
  $ DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop db:create
- oc rsh into the database pod and restore the database backup (see below for one way to copy the backup file into the pod).
  $ cd /var/lib/pgsql
  # --- download your backup here ---
  $ pg_restore -d vmdb_production <your_backup_file>
  $ rm -f <your_backup_file>
  $ exit
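One way to handle the “download your backup here” step, assuming the pg_dump file was first copied from the appliance to the machine where you run oc, is to copy it into the database pod with oc cp (the pod name and local path are placeholders):

  $ oc get pods -o name | grep postgresql
  $ oc cp /path/to/pg_dump postgresql-123456abcd-789ef:/var/lib/pgsql/pg_dump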
- Back in the orchestrator debug pod from above:
  $ rake db:migrate
  $ exit
- Delete the node selector that we added above.
  $ oc edit deployment/orchestrator
  removing:

  spec:
    template:
      spec:
        nodeSelector:
- Delete the pending orchestrator replica set.
  $ oc delete replicaset.apps/orchestrator-98765cba
Done! The orchestrator will start deploying the rest of the pods required to run the application.
External PostgreSQL
Running with an external Postgres server is an option; if you want the default internal Postgres, you can skip this step. Additionally, if you want to secure the connection, you need to include the optional parameters sslmode=verify-full and rootcertificate when you create the secret. To do this, manually create the secret and substitute your values before you create the CR.
$ oc create secret generic postgresql-secrets \
--from-literal=dbname=vmdb_production \
--from-literal=hostname=YOUR_HOSTNAME \
--from-literal=port=5432 \
--from-literal=password=YOUR_PASSWORD_HERE \
--from-literal=username=YOUR_USERNAME_HERE \
--from-literal=sslmode=verify-full \ # optional
--from-file=rootcertificate=path/to/optional/certificate.pem # optional
Note: If desired, the secret name is also customizable by setting databaseSecret in the CR.
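For example, a CR pointing at a custom secret name might include the following (a sketch; the secret name is a placeholder for whatever you chose instead of postgresql-secrets):

  spec:
    databaseSecret: <your custom secret name>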