This topic describes recommended use cases for VMware Tanzu for Valkey on Cloud Foundry and provides information to help you determine whether the product fits your enterprise's use case.
## Recommended Use Cases
On-demand plans are configured by default for cache use cases but can also be used as a datastore.
Shared-VM plans are designed for datastore use cases in testing or development environments. Do not use the shared-VM service in production.
Valkey can be used in many different ways, including:
- Key/value store: For strings and more complex data structures including Hashes, Lists, Sets, and Sorted Sets
- Session cache: Preserves session state when persistence is enabled
- Full-page cache: Preserves state when persistence is enabled
- Database cache: Middle-tier database caching to speed up common queries
- Data ingestion: Because Valkey is in memory, it can ingest data very quickly
- Message queues: List and set operations, including `PUSH`, `POP`, and blocking queue commands
- Leaderboards and counting: Increments and decrements of sets and sorted sets using `ZRANGE`, `ZADD`, `ZREVRANGE`, `ZRANK`, `INCRBY`, and `GETSET`
- Pub/Sub: Built-in publish and subscribe operations: `PUBLISH`, `SUBSCRIBE`, and `UNSUBSCRIBE`
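The leaderboard pattern above can be sketched without a live server. The following is a minimal in-memory stand-in for the sorted-set commands (`ZADD`, `ZINCRBY`-style increments, `ZREVRANGE`); the class and method names are illustrative only, and a real app would issue the equivalent commands through a Valkey client library instead.

```python
# Minimal in-memory sketch of the sorted-set leaderboard pattern.
# A real app would send ZADD / ZINCRBY / ZREVRANGE to a Valkey server
# through a client library; this stand-in only mirrors the semantics.

class Leaderboard:
    def __init__(self):
        self.scores = {}  # member -> score, like one sorted set

    def zadd(self, member, score):
        # ZADD: set the member's score
        self.scores[member] = score

    def zincrby(self, member, amount):
        # ZINCRBY: increment the member's score, creating it if absent
        self.scores[member] = self.scores.get(member, 0) + amount
        return self.scores[member]

    def zrevrange(self, start, stop):
        # ZREVRANGE: members ordered highest score first (stop is inclusive)
        ranked = sorted(self.scores.items(), key=lambda kv: -kv[1])
        return ranked[start : stop + 1]

board = Leaderboard()
board.zadd("alice", 100)
board.zadd("bob", 80)
board.zincrby("bob", 30)      # bob -> 110
top = board.zrevrange(0, 1)   # top two players
print(top)                    # [('bob', 110), ('alice', 100)]
```

Because sorted sets keep members ordered by score on the server, ranking queries like "top N" stay cheap even as the leaderboard grows.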
## Service Offerings
For descriptions of the On-Demand and Shared-VM service offerings for Tanzu for Valkey on Cloud Foundry, see the service offering documentation.
## Enterprise readiness checklist
Review the following table to determine if Tanzu for Valkey on Cloud Foundry has the features needed to support your enterprise.
| Resilience | Description | More Information |
|---|---|---|
| Availability | All service offerings of Tanzu for Valkey on Cloud Foundry are single VMs without clustering capabilities. This means that planned maintenance jobs (for example, upgrades) can result in 2–10 minutes of downtime, depending on the nature of the upgrade. Unplanned downtime (for example, VM failure) also affects the Valkey service. Tanzu for Valkey on Cloud Foundry has been used successfully in enterprise-ready apps that can tolerate downtime. Pre-existing data is not lost during downtime with the default persistence configuration. Successful apps include those where the downtime is passively handled or where the app handles failover logic. | Recommended Use Cases<br>Support for Multiple AZs |
| Failure Recovery | Recovery from VM failures and process failures is supported through automated service backups and BOSH Backup and Restore (BBR). | Configuring Automated Service Backups<br>BOSH Backup and Restore (BBR) for On-Demand VMware Tanzu for Valkey on Cloud Foundry<br>Manually Backing up and Restoring Redis for Pivotal Cloud Foundry |
| Isolation | Isolation is provided when using the on-demand service. Individual apps and workflows should have their own Tanzu for Valkey on Cloud Foundry instance to maximize isolation. | |
| Day 2 Operations | Description | More Information |
|---|---|---|
| Resource Planning | Operators can configure the number of VMs and the size of those VMs. For the on-demand service, the operator does this by creating plans with specific VM sizes and quotas for each plan. For the shared-VM service, the number and size of VMs are pre-provisioned by the operator. BOSH errands used for registration, upgrade, and cleanup use short-lived VMs that cannot be configured but can be turned on or off. | On-Demand resource planning<br>Shared-VM plan |
| Health Monitoring | Both the on-demand and shared service instances emit metrics. These include Valkey-specific metrics and Tanzu for Valkey on Cloud Foundry metrics. Guidance on critical metrics and alerting levels is captured in the Tanzu for Valkey on Cloud Foundry Key Performance Indicators (KPIs). | Key performance indicators |
| Scalability | For the on-demand service, the operator can configure three plans with different resource sizes and can scale up the VM size associated with each plan. Additionally, the operator can increase the quota, which caps the number of instances allowed for each on-demand plan. For the shared-VM service, the operator can change the Valkey instance memory limit as well as the instance limit. To prevent data loss, only scaling up is supported for both services. | Scaling the On-Demand Service |
| Logging | All Valkey services emit logs. Operators can configure syslog forwarding to a remote destination. This enables viewing logs from every VM in the Tanzu for Valkey on Cloud Foundry deployment in one place, effective troubleshooting when logs are lost on the source VM, and setting up alerts for important error logs to monitor the deployment. | Configuring syslog forwarding |
| Customization | The on-demand service can be configured to best fit the needs of a specific app. The shared-VM service cannot be customized. | Configuring the On-Demand service |
| Upgrades | For information about preparing an upgrade and understanding the effects on your Tanzu for Valkey on Cloud Foundry and other services, see Upgrading Tanzu for Valkey on Cloud Foundry. Upgrades run a post-deployment BOSH errand called smoke tests to validate the success of the upgrade. | Upgrades<br>Smoke Tests |
| Encryption | Description | More Information |
|---|---|---|
| Encrypted Communication in Transit | You can enable TLS encryption between apps and service instances. Additionally, Tanzu for Valkey on Cloud Foundry has been tested with the IPsec Add-on for PCF. | OS Valkey security<br>TLS in Tanzu for Valkey on Cloud Foundry<br>Securing data in transit with the IPsec add-on |
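When TLS is enabled, apps must connect through a TLS-capable client. As a rough sketch of what that involves on the client side, the snippet below builds a verifying TLS context with Python's standard library; the `make_tls_context` helper and the `ca_file` parameter are hypothetical placeholders for the CA certificate a service binding would typically provide, and a real Valkey client library would accept equivalent settings directly.

```python
import ssl

# Sketch: build a client-side TLS context as a Valkey client might,
# before opening a connection to the service instance.
# make_tls_context and ca_file are illustrative placeholders.
def make_tls_context(ca_file=None):
    ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)
    if ca_file:
        # Trust the CA certificate delivered with the service binding
        ctx.load_verify_locations(cafile=ca_file)
    ctx.check_hostname = True              # verify the instance's hostname
    ctx.verify_mode = ssl.CERT_REQUIRED    # reject unverified certificates
    return ctx

ctx = make_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Requiring certificate verification (rather than disabling it for convenience) is what makes the encrypted channel trustworthy end to end.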
## Availability Zones
On-demand Tanzu for Valkey on Cloud Foundry supports configuring multiple availability zones (AZs) to improve resiliency. However, assigning multiple AZs to Valkey service instances does not provide high availability. This is because each individual Valkey service instance is a single VM without clustering capabilities.
The following diagram shows a Valkey deployment configured with three availability zones.
Service instance VMs are placed in availability zones as follows:
- For on-demand plans: Service instances can be configured to deploy to any AZ. If you select multiple AZs, service instances are distributed randomly between them. This improves resiliency.
- For the shared-VM plan: Service instances run on a single VM in the AZ in which the tile is deployed.