
Cloud Manager Reference Manual

Docs · EInnovator · Posted 13 May 20

Cloud Manager » Deployments

The core functionality of Kubernetes is the ability to deploy applications and service workloads as containers. Several abstractions are provided to support different variations or kinds of deployment, including: Deployment, Pod, ReplicaSet, StatefulSet, Service, and others. EInnovator CloudManager provides full, transparent access to all K8s abstractions, with some simplifications for common functionality and use cases. CloudManager gives access and control over all deployments in a K8s namespace, and additionally (and optionally) supports managed Deployments. For Deployments managed by CloudManager, several high-level operations are supported, such as the automatic creation and management of related resources. A managed Deployment has its configuration state saved in CloudManager's own database, in addition to the Kubernetes cluster etcd "database". This introduces some flexibility in the support for management operations, while preserving all the behavior of K8s in the cluster.

For quick reference, we list below a summary of the different kinds of deployments in Kubernetes:

  • Pod — a unit of deployment (packaging one or more containers)
  • ReplicaSet — a set of replicas (Pods) of the same stateless application with the same version/generation number (used by Deployment)
  • Deployment — a deployment unit for stateless applications, that manages the availability of Pods and upgrades by creating ReplicaSets
  • StatefulSet — a deployment unit for stateful services (e.g. DBs, stateful message-brokers). Similar to Deployment, but with awareness of Pod identity suitable for stateful services
  • DaemonSet — a set of Pods (with the same container images) running one per cluster node
  • Service — a request-dispatching abstraction allowing multiple Pods (of a Deployment/ReplicaSet or StatefulSet) to be addressed by other apps in a transparent way

Two of the most common workflows when using CloudManager and K8s are to deploy stateless applications using a Deployment and a Service, and, for stateful services, a StatefulSet and a Service, as sketched in the manifest below.
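
For reference, the first workflow roughly corresponds to a manifest like the following minimal sketch (the name my-app and the image nginx are illustrative placeholders, not CloudManager defaults):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx          # any pullable Docker image
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                # by convention, same name as the Deployment
    spec:
      selector:
        app: my-app               # dispatches requests to the Deployment's Pods
      ports:
      - port: 80
        targetPort: 80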

The term deployment is often used interchangeably to refer both to the K8s Deployment abstraction and to any kind of deployment in a more general, loose sense. Context and wording should make clear in which sense the term is being used in each case.

Approaches to Deployment

Deployments can be set up in several ways, using either K8s command-line tools, such as kubectl and Helm, or the CloudManager UI/UX. The table below summarizes the different approaches.

Approach | Tool | Example | Description / Common Use Cases
Command-Line | kubectl | kubectl create deployment name --image=... | Command-line deployment (without manifest file)
Manifest File (YAML) | kubectl | kubectl apply -f manifest.yaml ... | Command-line deployment with one (or more) manifest file(s) (vanilla or custom created)
Helm Chart | Helm | helm repo add ...; helm install myrelease somechart | Command-line install, with the Helm tool, of a chart (a packaged and configurable set of manifest files)
Managed Deployment | CloudManager | see UI snapshots below | Using the Web UI/UX provided by CloudManager to create a managed Deployment
Managed Deployment with Custom Manifest | CloudManager | see UI snapshots below | Use a pre-available manifest file as a starting point to create a managed Deployment in CloudManager
Managed Solution | CloudManager | see UI snapshots in other parts of this documentation | Install solution packages, as Helm charts or other formats, from a configured Solution Repository in CloudManager

Listing Deployments

The details page of a Space in CloudManager has multiple tabs for different kinds of resources. For convenience, the Deployments tab integrates several kinds of K8s Deployments, including: Deployment, ReplicaSet, StatefulSet, and DaemonSet. A UI filter allows the listing to be customized using several criteria, including the kind of deployment, name, and status.

Each item in the list displays basic details about a Deployment, including: name and display name, kind, Docker image (of the first container in the Pod), icon, labels and annotations, status, and last modified date. For Deployments managed by CloudManager, it also displays the DNS routes/ingress of the associated Service.

The image below shows a snapshot of the UI with the list of Deployments in a Space.

Deployment List

The lists of Pods and Services in a Space are displayed in separate tabs. Each item in these lists displays basic information about the resource, about the same as for Deployments. For Services, additional information is displayed, including the port mapping configured for the service and the set of Pod endpoints the service dispatches requests to.

The images below illustrate a Pod list and a Service list.

Pod List Service List

Deploying a Docker Image

CloudManager provides a convenient UI/UX to deploy Docker containers. The button Deploy in a Space details dashboard opens a form where the details of the deployment can be specified.

The image below shows a snapshot of the UI for the creation of a new Deployment using the nginx Docker image.

Deploy Image

This documentation uses the term Managed Deployment for deployments created with the New Deployment form in the CloudManager UI. Non-managed deployments include the ones installed as part of packaged solutions (e.g. as a Helm Chart), installed with manifest files using kubectl, or deployed by any other means (e.g. already existing in a namespace before connecting to the cluster with CloudManager).

The minimum information required to deploy an application is to select a Docker image from an image Registry. Docker Hub is auto-configured in CloudManager as an image registry from which public images can be pulled. Additional Registries can be configured for an account, to allow pulling images from elsewhere, including from private repositories. For private repositories the Docker naming conventions (syntax) for images apply, namely domain/image for private image repositories in Docker Hub, and registry-host/domain/image for images pulled from other registries.

A name should also be specified for the deployment, unique in the Space/Namespace, and optionally a display name. A display name is automatically suggested from the image name, and a name is automatically suggested from the display name (e.g. the display name My DB Service suggests the name my-db-service). Names need to follow the K8s requirements for object IDs, namely: made of lower-case alphanumeric chars or the special char -, starting and ending with an alphanumeric char, with a maximum length of 64 chars.

CloudManager also allows an icon to be uploaded for a Deployment. The uploaded file is stored in a Document/File Store; information about the icon URL is saved in CloudManager's own database, and as an annotation named @icon when the deployment is created in the cluster. This icon is used to make a prettier display of details and listings. The @icon annotation, if provided in a Deployment, Pod, or Service, is used to display an icon even if the deployment was not created with the CloudManager form (e.g. a marketplace solution packaged as a Helm chart, or a manifest file).

The default kind of deployment created by CloudManager when using this form is Deployment, but other kinds can be selected using a combo-box selector, namely Pod, ReplicaSet, StatefulSet, and DaemonSet.

For kind Deployment, CloudManager allows (optionally) the creation of a Service and a DNS Ingress/Route to dispatch requests to the Pods managed by the Deployment. The Service will have the same name as the Deployment.

The DNS Ingress/Route is specified by selecting a hostname to concatenate with a domain name. Domain names are configured elsewhere. The administrator of the CloudManager installation will typically configure one or more global domain names. For example, in the EInnovator Public Cloud the domain names *.{username|groupname}.einnovator.cloud and *.{username|groupname}.nativex.cloud are configured automatically for each user and group (organization).

For kinds Deployment, ReplicaSet, and StatefulSet, the number of Pods or instances to be created is also specified. The default is 1 Pod/Instance, but any (non-negative) integer can be specified. The number of instances can also be scaled up/down after the Deployment is created and the application is running (a.k.a. horizontal scaling).

The terms Pod, Instance, and Replica are often used interchangeably to refer to the Pods managed by a Deployment or StatefulSet.

The resources associated with each Pod, Memory/Ephemeral-Storage, can also be specified. The default memory selection is 1Gi (1024 Mi/Megabytes). The default Ephemeral-Storage is also 1Gi (this may be the minimum allowed in some clusters). Alternative values can be specified. The size of the resources for the instances can be scaled up/down after the Deployment is created and the application is running (a.k.a. vertical scaling). The supported units of storage are as follows (see the resource snippet after this list):

  • Mi — Megabyte (binary, 1024 Ki = 2^20 bytes)
  • Gi — Gigabyte (binary, 1024 Mi = 2^30 bytes)
  • M — Decimal Megabyte (1000 K = 10^6 bytes)
  • G — Decimal Gigabyte (1000 M = 10^9 bytes)
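
These selections correspond to the standard Kubernetes notation for container resources. The fragment below is only a sketch of the relevant part of a Pod/container spec using the default values mentioned above; whether CloudManager sets requests, limits, or both is an internal detail of the managed Deployment:

    spec:
      containers:
      - name: my-app
        image: nginx
        resources:
          requests:
            memory: "1Gi"              # default memory selection
            ephemeral-storage: "1Gi"   # default ephemeral storage
          limits:
            memory: "1Gi"
            ephemeral-storage: "1Gi"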

Deployment Details

After a Deployment is created, the CloudManager UI displays a dashboard page with all its details. Similarly to elsewhere, the UI is structured in several tabs, each displaying different pieces of information. The main tab displays a list with all the Pods/Instances, with status information and other basic details, such as the selected image and the resource and instance settings. It also provides a toolbar to start/stop/restart a Deployment. Stopping a deployment is equivalent to setting the instance count to 0.

The image below shows a snapshot of the UI just after deploying an nginx image from a public repository hosted in the Docker Hub registry, with a single instance, with 64Mb of memory and 1Gi of storage.

Deployment Details

Scaling Deployments

A Deployment can be horizontally scaled by resetting the instance count. An edit button in the instance count panel can be used to open an editor for the instance count. Once the scaling request is submitted, CloudManager tracks the progress and keeps updating the instance list and count indicators.
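
In terms of the underlying K8s object, this scaling edit is equivalent to changing the replicas field of the Deployment spec (illustrative fragment; the name nginx is just an example):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 5    # scaled up from 1; setting this to 0 is equivalent to stopping the Deployment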

The first image below shows a snapshot of the UI while editing the instance count of an nginx Deployment to 5, and the other image shows the end result with an updated list of Pods: the original one plus 4 additional ones created to scale up.

Scaling Instances Scaled Deployment

Naturally, the same way a Deployment can be scaled up in the number of instances, it can also be scaled down. The image below shows the progress while the instance count is being scaled back down to 1.

Scaling Down Deployment

Deployment Meta-Data

A Deployment is submitted to the Space/Namespace of a K8s cluster by specifying all the details of how the deployment should be orchestrated and configured, including generic meta-data, the containers to be created inside the Pods, resource allocation and instance count, and other allocation and control parameters. This is about the same information as is specified in YAML manifest files, except that tools such as kubectl, helm, and CloudManager submit it via internal data-structures and communication with the K8s cluster API controller.

It is often useful to inspect the meta-data and specification provided to the cluster (e.g. for trouble-shooting, reviewing, planning upgrades, or auditing purposes). This information is available in the CloudManager UI in the MetaData tab. For managed deployments this information is split into three sub-tabs: one for the Deployment, one for the associated Service, and one for the current set of Pods.
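
As an illustration, the meta-data shown in these sub-tabs corresponds to the metadata section of the object manifests as stored in the cluster; a minimal sketch (names, labels, and annotations below are purely illustrative) is:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
      namespace: my-space
      labels:
        app: my-app
      annotations:
        kubernetes.io/change-cause: "scale replicas to 5"
    spec:
      # ... full specification as submitted to the cluster ...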

The images below show snapshots of the UI tabs displaying this meta-data.

Deployment Meta Deployment Meta Deployment Meta

Pod Logs

Pod logs are a key piece of information and a mechanism to trouble-shoot, develop, test, monitor, and audit applications. When running as part of a Deployment/ReplicaSet, each Pod/Instance produces its own log. A variety of tools and approaches are available to capture, integrate, filter, and search logs in K8s. These tools usually require the installation of dedicated components. CloudManager provides a simple approach to inspect logs, by allowing logs to be easily visualized within the web browser. The functionality provided includes:

  • Switch between logs of different pods of the same Deployment/ReplicaSet
  • Filter log messages by text matching
  • Filter log messages by level (assumes the application uses common log-level naming conventions, such as ERROR, DEBUG, INFO)
  • Resize the log window, and easily move the scroll area up and down
  • Follow log tails in (semi-)real-time as they are produced by Pods (WebSockets are used to implement this)

The above should provide a minimal amount of functionality to get started, which can be combined with more advanced approaches and specialized tooling. Notice that Kubernetes logs have the same lifetime as the Pods; thus, long-term storage of logs requires other approaches and specialized tooling.

The image below shows a snapshot of an example log of a Pod as depicted in the CloudManager UI.

Deployment Details

Editing and Upgrading Deployments

The tab Settings > Options in the details page of a Deployment, Pod, or Service displays a summary of the configuration settings of the resource. For managed Deployments most of the fields are editable from the UI, with the exception of the kind and the unique name, which are immutable. In particular, if the image (or registry) is updated (e.g. with a more recent image version/tag), the next time the Deployment is started or restarted this image will be used, thus replacing the previous image across all replica Pods.

The image below shows a snapshot of the Deployment edit panel.

Deployment Details

Service Ingress/Routes

In Kubernetes, Services can be made accessible via a DNS host.domain name by relying on the Ingress mechanism. An Ingress specifies the details to make the service accessible, including a set of DNS names and, in the case of secured https access (SSL/TLS), a certificate. In CloudManager, the set of DNS names a Service is accessible under can be managed using the Routes tab. Each DNS name in an Ingress is designated as a Route.
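
For reference, a sketch of an Ingress with a single Route (DNS name) and a TLS certificate is shown below; the host name, Service name, and secret name are illustrative only:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      tls:
      - hosts:
        - my-app.mygroup.einnovator.cloud
        secretName: my-app-tls                  # certificate for https (SSL/TLS) access
      rules:
      - host: my-app.mygroup.einnovator.cloud   # one Route (hostname + configured domain)
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                    # the Service the Route dispatches to
                port:
                  number: 80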

For managed Deployments, and as a convenience, the Routes tab is also available in the Deployment associated with the Service. Notice that the list of routes of a managed Deployment is also stored in CloudManager's own database. If you update the set of Routes using the Service UI, you should run a Sync operation on the Deployment, available in the context menu of the Settings > Options tab.

The images below show snapshots of the UI for managing the set of routes of a Service (left-hand side) and the associated Deployment (right-hand side).

Deployment Details Deployment Details

Deleting/Uninstalling Deployments

A Deployment, Pod, or Service can be deleted by using the command Delete in the context menu found in the Settings > Options tab. A confirmation modal pops up to safeguard against accidental deletes. For managed Deployments, several options are available to delete associated resources, including:

  • Option to delete the deployment in the Cluster, or only in the CloudManager database
  • Option to delete the associated Service (with the same name)
  • Option to delete the Ingress/Routes bound to the associated Service (same name as the Deployment)

The images below show snapshots of the context menu and the modal to delete a Deployment and associated resources.

Deployment Details

For non-managed Deployments, Pods, and Services installed as part of a packaged solution (e.g. a Helm Chart), the preferred approach to delete resources is to perform an uninstall using the package manager. This option is also available in the context menu found in the Settings > Options tab. Internally, CloudManager uses the meta-data (labels and annotations) of the resource to infer which package manager and release name were used.

The images below show snapshots of the context menu and the modal to uninstall a packaged solution.

Deployment Details Deployment Details

Deployment Events

Kubernetes tracks and manages an event history for a Deployment at the cluster level. CloudManager complements this by tracking "high-level" events for managed Deployments. The tab Instances > Events lists the events for a Deployment sorted from most recent to least recent. A radio-box allows switching the view between "high-level" events and "cluster-level" events. For non-managed deployments only the K8s cluster-level events are available.

The images below show an example of a deployment's high-level events (left-hand side) and cluster-level events (right-hand side).

Deployment Events Deployment Events

High-level events also produce user notifications (via email, SMS, and in-app) if the user settings specify that such notifications should be received.

The table below summarizes the high-level events generated by CloudManager. The cluster-level events depend on implementation details of the K8s controllers, and are not listed here.

Resource Type | Event Type | Description
Deployment | Created | Managed Deployment created
Deployment | Updated | Deployment configuration updated
Deployment | Started | Deployment started
Deployment | Stopped | Deployment stopped
Deployment | Crashed | Deployment crashed
Deployment | Restarted | Deployment restarted
Deployment | Scaled Replicas | Replica/Instance/Pod count changed
Deployment | Scaled Resources | Memory/Storage resources changed
Deployment | Build | CI/CD Build started
Mount | Added / Removed / Updated | Deployment Mount added/removed/updated
Variable | Added / Removed / Updated | Deployment Environment Variable added/removed/updated
Repository | Added / Removed / Updated | Deployment Repository added/removed/updated
Route | Added / Removed / Updated | Deployment Route added/removed/updated
Binding | Added / Removed / Updated | Deployment Binding added/removed/updated
Connector | Added / Removed / Updated | Deployment Connector added/removed/updated

Auto-Environment

For managed Deployments it is possible to specify two enumerated properties: category and stack, both descriptive hints to users and tools about the category of application run by the containers and the software stack used in development. The valid category values and intended use cases are summarized below.

  • Application (AutoEnv) — End-user application with web UI/UX
  • Service (AutoEnv) — Service without web UI/UX
  • WebServer — HTTP web server
  • Database — Database or persistence store of any kind (e.g. RDBMS, NoSQL, FileStorage, etc.)
  • Message Broker — Messaging middleware (e.g. supporting AMQP, MQTT, STOMP, and other protocols)
  • Tool (AutoEnv) — Any devops tool (with or without web UI)
  • Suite/Composite (AutoEnv) — Some combo of the above
  • Other (AutoEnv) — Fallback for other categories not covered above

The images below show snapshots of the deployment creation form where category Application and stack Suite/Boot are specified.

Deployment Creation Deployment Auto Env

For certain values of category (indicated above with AutoEnv), it is possible to request environment variables to be automatically configured for the Deployment. These auto-configured variables are merged with the variables defined at the Deployment level. The available options include:

  • Space Inherit – Inherit common environment variables defined at the Space level (if any)
  • Cluster Inherit – Inherit common environment variables defined at the Cluster level (if any)
  • VCAP (CF) – VCAP/Cloud Foundry compatibility variables (VCAP_APPLICATION, VCAP_SERVICES)

In case of variable name clashes, environment variables defined at the Deployment level have precedence over the Space and Cluster levels, and variables defined at the Space level have precedence over the ones defined at the Cluster level.

When a Stack is selected, additional environment variables whose names and values depend on the stack can also be auto-configured. The following additional options are available to fine-tune the setting of the stack-specific automatic environment variables:

  • Bindings – Request setting environment variables from the Bindings defined for this Deployment.
  • Skip Unbound – If true, bindings that don't have a matching Connector are ignored.
  • Connectors – Create connectors that export property-specific values to be bound to other Deployments.
  • EnvVars – Configure other stack-specific environment variables, not dependent on Bindings.

The table below summarizes the stack-specific environment variables that can be auto-configured by CloudManager at deployment and (re)start time. A sketch of the resulting container environment follows the table.

Variable | Option | Suite/Java | Suite/Generic BackEnd | Spring Boot | Java BackEnd | Generic BackEnd | Front-End/Static/Other | Description
SERVER_PORT | EnvVars | Yes | No | Yes | No | No | No | Server port
SPRING_DATASOURCE_{URL,USERNAME,PASSWORD} | Binding | Yes | No | Yes | No | No | No | RDBMS DB
SPRING_AMQP_{HOST,PORT,USERNAME,PASSWORD,VIRTUALHOST} | Binding | Yes | No | Yes | No | No | No | Messaging
{SSO,NOTIFICATION,DOCUMENTS,PAYMENTS,SOCIAL}_SERVER | Binding | Yes | Yes | Yes | No | No | No | Service Suite URLs
UI_LINKS_CDN | Binding | Yes | No | No | No | Yes | Yes | CDN URL
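
As an illustration only, the auto-configured variables end up as regular environment variables in the container spec of the Deployment, merged with any Deployment-level variables; the exact values are derived from Bindings and Connectors at (re)start time. The variable names below are from the table above; the values are placeholders:

    spec:
      containers:
      - name: my-app
        env:
        - name: SERVER_PORT                    # EnvVars option (e.g. Spring Boot stack)
          value: "8080"
        - name: SPRING_DATASOURCE_URL          # from a Binding to a database Deployment
          value: "jdbc:mysql://mydb:3306/mydb"
        - name: SPRING_DATASOURCE_USERNAME
          value: "myuser"
        - name: SPRING_DATASOURCE_PASSWORD
          value: "mypassword"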

Volume Mounts and Persistence

Kubernetes, just like Docker, allows containerized applications to mount persistent volumes whose data outlives the containers that are using it. This is an essential feature to support stateful applications and services, like databases, file-stores, reliable/persistent message-brokers, and others. In a nutshell, Persistent Volumes are attached or mounted on a file-system path defined by configuration (as a persistent volume claim). If a container is shut down and/or replaced by another, the new container will mount the same volume, thus allowing the data to be "recovered" and to outlive the container (assuming the configuration is maintained). K8s generalizes this concept further, and allows configuration information from a key in a ConfigMap or Secret to be mounted at a file-system path as well, as an alternative to providing information to applications via environment variables.

CloudManager has full support for persistent volumes and volume claims. In the Space details dashboard, the tab Persistence displays and allows managing the list of PersistentVolumeClaims defined in a Space. In the Deployment details dashboard, the tab Settings > Mounts lists all the Mounts defined for a Deployment. For managed Deployments, the UI allows editing the mount list, including adding, removing, and editing mounts. When a Deployment is (re)started the changes are applied, and the application will be able to access the data stored in the Volume. For non-managed Deployments the information is read-only, and fetched from the Deployment/Pod specification template.

CloudManager supports all the mount types of K8s, namely (Persistent) Volume, ConfigMap, and Secret. The button Add Mount opens a modal to create a new mount point. Once created and saved, a mount can be revised and edited. For type Volume, the user should select between an existing Volume Claim or a new one. For new ones, the underlying K8s cluster should ensure that a persistent volume is automatically allocated when the application (re)starts. For types ConfigMap and Secret, CloudManager allows the names to be picked up from the ones already defined in the Space and valid keys to be selected from these (recommended).
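
A sketch of how these mount types appear in the resulting Pod spec is shown below; the claim name, ConfigMap name, and mount paths are illustrative:

    spec:
      containers:
      - name: my-app
        volumeMounts:
        - name: data
          mountPath: /var/lib/data          # Mount Path for a (Persistent) Volume
        - name: config
          mountPath: /etc/my-app            # keys of the ConfigMap mounted as files
          readOnly: true
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-app-data            # existing or newly created PersistentVolumeClaim
      - name: config
        configMap:
          name: my-app-config
          optional: true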

The images below show snapshots of the UI modal to create/edit Mounts.

Deployment Details Deployment Details

After a (re)start, the Deployment/Pod Meta-data panel can also be used to confirm that the configured volumes were mounted as specified. This is shown as an example in the images below.

Deployment Details Deployment Details

The table below summarizes, for each type of mount, the fields to configure when creating/editing a mount point. Optional fields are marked with a +.

Volume/Mount Type | Field | Type | Description
Volume | Name | ObjectID | Mount a Persistent Volume on the specified path
Volume | Mount Path | Path |
Volume | Read-Only+ | Boolean |
Volume | VolumeClaim+ | ObjectID |
Volume | Size+ | Integer+Unit |
Volume | Reclaim+ | Retain / Recycle / Delete |
ConfigMap / Secret | Name | ObjectID | Mount a ConfigMap or Secret key, or set of keys, in one or more paths
ConfigMap / Secret | ConfigMap Name / Secret Name | ObjectID |
ConfigMap / Secret | Mount Path / Items | Path / (key,path)* |
ConfigMap / Secret | Optional+ | Boolean |

Pod File-Manager

A very useful feature of CloudManager is the embedded file-system browser and console. This allows exploring the files stored in a Pod, either in ephemeral storage or mounted from a persistent volume. A common use case for the embedded file-browser is to manually upload files to a webserver (e.g. nginx). Most common operations are supported, including:

  • List files and folders/directories
  • Filter file list
  • Navigate across folders
  • Create new folder/directory
  • Download files
  • Upload files from local file-system
  • Upload files from a Git repository (for managed Deployments only)
  • Switch between Pods of the same managed Deployment
  • Smart selection of home directory

File access is implemented by using the K8s API to invoke Linux/Unix shell commands, such as ls, mkdir, and tee. So file browsing and operations work only to the extent that these commands are available in the Pod.

The home/starting directory for file browsing depends on the image and Deployment/Pod details. By default, and as a fallback, the root directory / of the file-system is set as the home dir. For recognized web servers the home folder is the document root for the server (e.g. if an nginx image is detected, the folder selected is /usr/share/nginx/html).

The image below shows a snapshot of the embedded file-system UI, while browsing files in an nginx image.

Deployment Details

Pod Console

In addition to the file-system browser, an interactive console is also available to issue commands in a Pod. This is equivalent to running kubectl exec podname -it -- command, but with the convenience of the integrated CloudManager UI. The features supported include:

  • Execution of remote commands in Pod
  • Command history
  • Window resizing and scroll up/down
  • Clear console

Only commands installed in the Pod can be executed from the embedded console.

The image below shows a snapshot of the embedded console while executing the command ls -la.

Deployment Details

Deployment CI/CD

Learning More
