Clustering - Deployment
How do I access the UI after scaling out the cluster?
The UI can be accessed from Platform1 only.
What is Platform1 and why do I need to remember this node?
The platform node from which the cluster creation process is initiated is treated as Platform1. Out of the n nodes in the cluster, the UI should be accessed only from this node.
How is data retrieved from the other nodes in a cluster if the UI access is restricted to Platform1?
The data of the data center is distributed across all nodes in a cluster. When the UI layer requests data on Platform1, the Platform1 node retrieves the data stored on all nodes and sends a response to the UI.
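The fan-out just described can be sketched as a simple scatter-gather: Platform1 queries every node for its shard of the data and merges the partial results into one response. This is a minimal illustration, not the product's actual API; the node names and the `fetch_local_data` helper are hypothetical stand-ins for the per-node lookup.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical cluster membership; Platform1 is the node serving the UI.
CLUSTER_NODES = ["platform1", "platform2", "platform3"]

def fetch_local_data(node, query):
    # Stand-in for the per-node lookup; in the real product each platform
    # node would return the shard of data it stores locally.
    return [f"{node}:{query}"]

def handle_ui_request(query):
    # Platform1 fans the query out to every node in the cluster in
    # parallel, then merges the partial results into a single response.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda n: fetch_local_data(n, query), CLUSTER_NODES)
    return [row for part in partials for row in part]
```

The key point is that the UI only ever talks to Platform1; the scatter-gather across the other nodes happens behind that single entry point.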
Can I use a platform node that is deployed in a different data center for creating clusters?
All nodes in a cluster exchange data between them. So, to avoid latency issues, it is recommended to use platform nodes deployed in the same data center to create a cluster.
What happens to data on the existing platform when I scale out the platform node?
The data on an existing platform node is preserved and distributed across all nodes in the cluster.
Does the number of collector VMs matter in determining how many platform bricks I need?
No. Only the total number of VMs across all VMware vCenters and the status of the flows (enabled or deactivated) have an impact on the number of bricks needed. Refer to the brick model table in the VMware Aria Operations for Networks Installation Guide.
Does the number of VMware vCenters, the number of physical devices (like routers), or any other type of data sources have an impact on the number of platform bricks I need?
No. Only the total number of VMs across all VMware vCenters and the status of the flows (enabled or deactivated) have an impact on the number of bricks needed. Refer to the brick model table in the VMware Aria Operations for Networks Installation Guide.
Does VMware Aria Operations for Networks support a platform cluster distributed across 2 data centers for HA reasons?
No. The platform cluster doesn't support splitting across data centers. All platform cluster VMs should be in the same site. The platform cluster doesn't support HA today; it is on the roadmap. Customers can use SRM for HA against DR across 2 sites.
Does VMware Aria Operations for Networks support a single VMware vCenter with more than 6000 VMs and flows enabled?
Up to Release 3.5, the VMware Aria Operations for Networks collectors don't support collecting data from a single large VMware vCenter with more than 6000 VMs with flows. This is on the roadmap.
How much disk space is needed on Platform1?
Platform1 requires more disk space compared to the other nodes in the cluster, as some of the configuration data is stored on Platform1 only.
What happens if any of the nodes runs out of disk space?
The UI starts showing error messages when disk space on any particular platform node reaches a certain threshold. Add more disk space to the platform node by logging in to VMware vCenter.
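This kind of threshold check can be illustrated with a short sketch using the standard library. The 85% value here is an assumption for illustration only; the actual threshold at which the UI starts warning is not stated in this document.

```python
import shutil

# Assumed warning threshold (85% used); the real product's value
# is not documented here.
USAGE_THRESHOLD = 0.85

def disk_usage_ratio(path="/"):
    # Fraction of the filesystem at `path` that is currently used.
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def needs_more_disk(path="/", threshold=USAGE_THRESHOLD):
    # True once the node crosses the warning threshold, i.e. when
    # more disk should be added to the VM via vCenter.
    return disk_usage_ratio(path) >= threshold
```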
How many times is data replicated in the cluster?
The data replication
mechanism depends on the components present in the platform node.
How do the clusters work?
- All proxies in a deployment connect to one platform (Platform1). The connectivity between platform and collector is through HTTPS on port 443, so only port 443 is visible to proxies from Platform1.
- Upon receiving requests from the collectors, the Platform1 node load balances them across the other platform nodes in the cluster in round-robin fashion.
- The platform node normalizes the data and puts it in a messaging queue for processing by the computation engine.
- The computation engine distributes the data across all nodes in the cluster by using a data replication mechanism. That way there won't be any data loss if any node (except Platform1) goes down in the cluster.
- Some of the configuration data is stored explicitly on the Platform1 node and is not replicated. That's the reason why the high availability solution is not supported.
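The round-robin step above can be sketched as follows. `Platform1Dispatcher` and the node names are hypothetical illustrations, not the product's actual API; the point is only that incoming collector requests are handed to the remaining platform nodes in strict rotation.

```python
import itertools

class Platform1Dispatcher:
    """Sketch of Platform1's round-robin fan-out to the other nodes."""

    def __init__(self, nodes):
        # Cycle endlessly over the other platform nodes in the cluster.
        self._nodes = itertools.cycle(nodes)

    def dispatch(self, request):
        # Each incoming collector request goes to the next node in turn.
        node = next(self._nodes)
        return node, request

# Four requests alternate between the two non-Platform1 nodes.
dispatcher = Platform1Dispatcher(["platform2", "platform3"])
assignments = [dispatcher.dispatch(f"req{i}")[0] for i in range(4)]
```

Round-robin keeps Platform1 itself as a thin entry point: it forwards work evenly rather than processing everything locally, which matches the single-UI-endpoint design described earlier.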