A two-pod VVD design with an underlay built on Cisco ACI: OSPF provides adjacencies in a single area on the underlay, while BGP provides adjacencies for the overlay provided by NSX.
Problem description: inconsistent routing of traffic on the overlay.
The root of the problem:
Edge-1 was publishing overlay routes to the ACI fabric, allowing the underlay to route traffic to the overlay networks. ACI was then advertising those routes back to Edge-2 with a higher priority than the routes the DLR was advertising to Edge-2. As a result, Edge-2 could not forward traffic to its connected DLR.
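The failure mode above comes down to route preference: a router installs the candidate route with the lowest administrative distance. The following is a minimal sketch of that selection logic, not NSX code; the protocol names and distance values (110 for OSPF, 200 for iBGP) are common defaults used here as assumptions for illustration.

```python
# Illustrative sketch (not NSX code): why Edge-2 preferred the ACI-learned
# path over the DLR-advertised path. The admin-distance values are common
# defaults (OSPF 110, iBGP 200) and are assumptions for this example.

# Candidate routes to the same overlay prefix, as seen by Edge-2
candidates = [
    {"prefix": "10.10.0.0/24", "next_hop": "ACI fabric", "admin_distance": 110},
    {"prefix": "10.10.0.0/24", "next_hop": "DLR",        "admin_distance": 200},
]

def best_route(routes):
    """Pick the route with the lowest administrative distance."""
    return min(routes, key=lambda r: r["admin_distance"])

chosen = best_route(candidates)
print(chosen["next_hop"])
```

With these values the ACI-learned path wins, so Edge-2 sends overlay-bound traffic back into the underlay instead of to its connected DLR.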
Resolution: change the design from uplinks on multiple Edges to an uplink on a single Edge with HA enabled.
Edge configuration: uplink to the vDS and an internal connection to the Global Transport Network.
OSPF routing on the Edge: Uplink1 connects to the ToR switch for OSPF traffic.
BGP routing on the Edge: configure the BGP neighbors.
Route redistribution on the Edge: two rules are needed, redistributing BGP into OSPF and accepting all BGP routes.
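Putting the Edge steps together, the routing intent can be summarized as follows. This is a schematic only, not actual NSX syntax; interface and neighbor names match the design described above.

```
# Schematic of the Edge routing intent (not actual NSX syntax):
ospf:
  area: single area
  interface: Uplink1          # faces the ToR switch
bgp:
  neighbors: [DLR]            # overlay adjacency
redistribution:
  - from: bgp
    into: ospf                # advertise overlay routes to the underlay
  - accept: all bgp routes
```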
DLR configuration: uplink to Edge-01 with three local virtual switches.
OSPF configuration on the DLR: disabled.
BGP configuration on the DLR: a BGP connection to the Edge.
Route redistribution on the DLR: only redistributing connected routes into BGP is needed.
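The matching DLR side can be summarized the same way. Again, this is a schematic of the intent, not actual NSX syntax.

```
# Schematic of the DLR routing intent (not actual NSX syntax):
ospf: disabled
bgp:
  neighbors: [Edge-01]
redistribution:
  - from: connected           # the three local virtual switches
    into: bgp
```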
Final configuration architecture:
This post takes you through a brownfield setup of LCM. It starts with information on how to use LCM; please note that OVA configuration is centered on greenfield deployment and is outside the scope of this document. Note, though, that in a greenfield deployment you will want to deploy vIDM and then LCM before the rest of the vRealize Suite.
The configuration starts from this point.
Click on the plus icon to create a new environment.
We will use the installation wizard; at the end we will save the configuration in a JSON file that you can reuse to drive the install from the configuration file.
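To give a rough idea of what such a saved configuration can look like, here is a tiny fragment. The field names below are purely hypothetical placeholders for illustration, not the actual vRealize Suite LCM schema; use the file the wizard exports as your reference.

```
{
  "environmentName": "example-env",
  "datacenter": "example-dc",
  "products": ["vRealize Operations", "vRealize Log Insight", "vRealize Business for Cloud"]
}
```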
OK, let's create and import a Data Center; building a greenfield environment is outside the scope of this document. Click the plus icon to add a new Data Center.
You are now on the Manage Data Center screen. Click the button in the right-hand corner.
Click Add Data Center.
Give your Data Center a Name and choose the proper location for that Data Center.
Here you can see that you can manage data centers around the world and manage them from a single pane of glass.
The Summary tab allows you to delete Data Centers and shows a high-level summary.
Once you have added a Data Center, you can add vCenter Servers to a Data Center.
You can now choose what type of vCenter Server is added to the Data Center: Management, Workload, or Consolidated.
Hit Submit and verify that the operation is complete.
Now go back to the home button and let's create our environment. Choose the Data Center name, choose the environment type, and name the environment. Create an administrator and a default password.
We did not select vRealize Automation and chose to import vRealize Business for Cloud, vRealize Log Insight, and vRealize Operations.
Read and agree to the EULA.
Press next and notice the product choices show up as icons in the menu ribbon.
Add the license key.
Press Next and fill out the Infrastructure tab with the location of the LCM server.
Press Next and fill in information about the network LCM is on: gateway, DNS, netmask, etc.
You will need a certificate, even if it is self-generated! You can save your progress and go to Settings to create a certificate.
Now add in the certificate and press next.
Now point LCM to the products you want it to manage. Fill out each product with the correct information. Take note that it will ask for an IP for vRealize Operations and the master node FQDN for vRealize Log Insight.
Time to Validate!
Downloading the configuration will produce a JSON file. Then click Submit, and you have completed the configuration of LCM.
Upcoming blogs will cover OVA configuration, greenfield configuration, and how-tos within LCM now that it is configured.
It is critical that a VVD project be a greenfield deployment with a spine-leaf network topology. While a VVD can be accomplished on a traditional three-tier network, that may not fit the prescribed nature of a VVD. The preferred routing protocol is BGP; VVD can support OSPF configurations with a little extra effort. If there is no dynamic routing protocol, a VVD install will need a custom NSX design to fit your network. Understanding the firewall topology between different network zones is critical.
Failure to start with the prescribed greenfield spine-leaf topology, BGP, and good firewall documentation can add time and cost to a VVD project.
Please see https://kb.vmware.com/s/article/2079386: UDP port 4789 is required for data-center-to-data-center VTEP (VXLAN) traffic.
Please see https://communities.vmware.com/docs/DOC-34307 for all ports needed if there are firewalls between network segments in one data center, along with an understanding of which ports need to be open for users to access the management products.
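When validating firewall rules between segments, a quick TCP reachability probe can confirm that a management port is open before the deployment starts. The sketch below is a generic helper, not a VMware tool; note it only covers TCP ports, so the VXLAN/VTEP port 4789, which is UDP, needs a different check.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    TCP only: the VTEP port 4789 is UDP and cannot be verified this way.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: listen on an ephemeral local port and probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
reachable = port_open("127.0.0.1", demo_port)
listener.close()
print(reachable)
```

In practice you would point `port_open` at each management product's host and the ports listed in the document linked above.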
Prerequisites such as AD service accounts and working forward and reverse DNS for all server FQDNs are paramount. SSL certificates need to be generated before the start of the project. Having identical hardware per cluster is VMware best practice and is necessary for a successful VVD deployment.
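The forward and reverse DNS prerequisite is easy to sanity-check with a short script. This is a generic sketch using the standard library; the hostname in the example is a placeholder for your own server FQDNs.

```python
import socket

def check_dns(fqdn):
    """Verify forward (name -> IP) and reverse (IP -> name) resolution.

    Returns (ip, reverse_name); raises OSError if either lookup fails.
    Run this against every server FQDN before the project starts.
    """
    ip = socket.gethostbyname(fqdn)                 # forward lookup
    reverse_name, _, _ = socket.gethostbyaddr(ip)   # reverse lookup
    return ip, reverse_name

# Example with a placeholder name that resolves on most systems:
ip, name = check_dns("localhost")
print(ip, name)
```

Looping this over your inventory list catches missing PTR records early, before they surface as certificate or product-registration failures mid-deployment.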
As for a list of prerequisites and a guide to prepping for a VVD engagement, I assume that VLANs, IPs, and FQDNs will follow your corporate procedures.
Yes, the prerequisites may seem daunting, but many of them are core services for any data center. The vast majority of this prep work will cut down on process and change-approval time and allow you to focus on the deployment tasks laid out in the VMware Validated Design documentation.