vRO Scriptable task to kill idle vCenter sessions

This is a recommended best practice: kill idle vCenter sessions that are more than 24 hours old. This scriptable task kills every idle session older than the maxidletime variable. Cut and paste and go.
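The whole thing fits in a single scriptable task. Here is a minimal sketch of such a task, assuming the vCenter plugin is configured and a number input or attribute called maxidletime (in hours) is bound to it:

//Minimal sketch: terminate vCenter sessions idle longer than maxidletime (hours)
var now = new Date();
for each (var sdk in VcPlugin.allSdkConnections) {
    var sessionManager = sdk.sessionManager;
    if (sessionManager == null || sessionManager.sessionList == null) continue;
    var staleKeys = new Array();
    for each (var session in sessionManager.sessionList) {
        //never kill the session vRO itself is using
        if (session.key == sessionManager.currentSession.key) continue;
        var idleHours = (now.getTime() - session.lastActiveTime.getTime()) / (1000 * 60 * 60);
        if (idleHours > maxidletime) {
            System.log("Terminating session for " + session.userName + " idle for " + idleHours.toFixed(1) + " hours");
            staleKeys.push(session.key);
        }
    }
    if (staleKeys.length > 0) {
        sessionManager.terminateSession(staleKeys);
    }
}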


vRO add all virtual machines to NSX exception list

Almost everyone implements NSX in a brownfield environment. Switching the DFW to default deny is the safest bet, but it is hard to do in a brownfield environment: denying all existing traffic on day one is a bad idea, and converting every application into DFW rules at once is not practical. One way to solve this is to put all virtual machines on the exception list, then move each machine off the list once you have created the correct allow rules for it. I didn't want to add all my machines manually via the GUI, so I explored the API.

How to explore the API for NSX

VMware's beta developer center provides the easiest way to explore the NSX API. You can find the NSX section here. Searching the API for "excep" quickly turned up the following answer:

[Screenshot: API search results showing the exclude list GET, PUT and DELETE methods]

As you can see there are three methods (GET, PUT, DELETE). It's always safe to start with a GET as it does not produce changes. Using Postman for Chrome, I was quickly connected to NSX; see my settings below:

[Screenshot: Postman GET against the NSX exclude list]

The return from this GET was many lines of machines that I had manually added to the exception list, for example the following:

[Screenshot: an exclude list entry showing the virtual machine's objectId]

Looking at this virtual machine you can see it's identified by its objectId, which is exactly what the PUT and DELETE methods take. The following worked perfectly:

Delete
https://192.168.10.28/api/2.1/app/excludelist/vm-47

Put
https://192.168.10.28/api/2.1/app/excludelist/vm-47

A quick GET showed that vm-47 was back on the list. Now we had one issue: the designation and inventory of objectIds is not a construct of NSX but of vCenter.

The Plan

In order to be successful I needed to do the following:

  • Gather a list of all objectIds from vCenter
  • PUT the entries one at a time into NSX's exclude list
  • Have some way to orchestrate it all together

No surprise, I turned to vRealize Orchestrator. I wanted to keep it to generic REST connections and not use the NSX plugin. So my journey began.

Orchestrator REST for NSX

  • Log in to Orchestrator
  • Switch to the workflow view
  • Expand the library and locate the Add a REST host workflow
  • Run the workflow

[Screenshots: Add a REST host workflow inputs]

  • Hit submit and wait for it to complete
  • You can verify the connection by visiting the administration section and expanding REST connections (the inputs are sketched below)
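The inputs amount to roughly the following (the address comes from the Postman tests above; your NSX Manager URL and credentials will differ):

  • Name: any label, for example NSX Manager
  • URL: https://192.168.10.28
  • Host authentication: Basic, using an NSX Manager account with API access
  • Certificate: accept the self-signed certificate when prompted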

Now we need to add a REST operation for adding entries to the exception list.

  • Locate the Add REST operation workflow
  • Run it
  • Fill out as shown

[Screenshot: Add REST operation inputs]
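The operation boils down to the exclude-list PUT with the VM id left as a template parameter (the name is just a label):

  • Name: Put on Exclude List
  • Method: PUT
  • URL template: /api/2.1/app/excludelist/{vm-id}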

You now have a PUT operation that takes {vm-id} as input before it can run. To test it, we go back to Postman, delete vm-47 and do a GET to verify it's gone:

Delete:

https://192.168.10.28/api/2.1/app/excludelist/vm-47

Get:

https://192.168.10.28/api/2.1/app/excludelist

 

It is missing from the GET. Now we need to run our REST operation:

  • Locate the workflow called: Invoke a REST operation
  • Run it as shown below

[Screenshots: Invoke a REST operation inputs]

Once it completed, a quick Postman GET showed me vm-47 was back on the exclude list. Now I am ready for prime time.

Creation of an Add to Exclude List workflow

I need to create a workflow that just runs the REST operation that adds to the exclude list.

  • Copy the Invoke a REST operation workflow
  • The new workflow should be called AddNSXExclude
  • Edit new workflow
  • Go to inputs and remove all param_xxx except param_0
  • Move everything else but param_0 to Attributes

[Screenshot: workflow inputs and attributes after the cleanup]

  • Let’s edit the attributes next
  • Click on the value for the restOperation attribute and set it to the "Put on Exclude List .." operation you created earlier

[Screenshot: restOperation attribute set to the Put on Exclude List operation]

  • Go to the Schema and edit the REST call scriptable task
  • Remove all param_xxx except param_0 from the IN on the scriptable task

[Screenshot: scriptable task IN bindings]

  • Edit the top line of the scripting to read like this:

var inParamtersValues = [param_0];

  • Close the scriptable task
  • Click on presentation and remove everything but the content question

[Screenshot: presentation reduced to the content question]

Now we have a new issue: we need the workflow not to error when the return code is not 200, for example if the object is already on the exception list. We just want everything on the list right away. So edit your schema to remove everything but the REST call:

[Screenshot: schema reduced to just the REST call]
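If you prefer to see it in script form, the remaining REST call amounts to roughly the sketch below (a simplified version of the library scriptable task, with the status code logged instead of thrown; restOperation and param_0 are the attribute and input from the copied workflow):

//build and execute the request for the single template parameter
var inParamtersValues = [param_0];
var request = restOperation.createRequest(inParamtersValues, null);
var response = request.execute();
//log rather than throw so a non-200 (e.g. VM already excluded) does not fail the workflow
System.log("Exclude list PUT for " + param_0 + " returned status " + response.statusCode);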

 

Put it all together with a list of virtual machines

Time for a new workflow with a scriptable task.

  • In the General tab add a single attribute that is an array of strings

[Screenshot: workflow attribute defined as an array of strings]

  • Add a scriptable task to the schema
  • Add a foreach element to the schema after the scriptable task
    • Link the foreach loop to the AddNSXExclude workflow you made in the last step
    • Link vmid to param_0

[Screenshot: schema with the scriptable task and foreach element]

  • Edit the scriptable task and add the following code:

//get the list of all VMs from vCenter
var vms = System.getModule("com.vmware.library.vc.vm").getAllVMs();
var vmid = new Array();
for each (var vm in vms)
{
    vmid.push(vm.id);
}

 

  • Add an IN for vmid and an OUT for vmid
  • Run it and you're done; you can see the response headers in the logs section

Hope it helps you automate some NSX.

What does apply to mean in NSX Firewall?

When I first started using NSX I ran into this little problem: what does apply to mean and how should I use it?

Background

I believe the background for apply to comes from physical firewalls, which allowed you to apply rules to a specific interface. Applying to an interface had the following effects:

  • Limit the number of rules that have to be processed
  • Allow specific fine-grained controls

Applying rules to specific interfaces had a few issues:

  • You had to have a good understanding of the network topology in order to apply rules correctly
  • New interfaces may be missed by rules

You also had the ability to apply a rule to every interface that existed. On the surface, if you had enough hardware to apply the rules everywhere, it worked great, but tons of interfaces that didn't need the rules now had them. There are a few problems:

  • New interfaces would have no rules, and all rules would have to be applied to them
  • These rules exist only on a single firewall; rule creation is specific to that firewall

NSX Firewall

The NSX firewall takes a similar approach to firewall application. All firewall rules are created in NSX Manager and stored inside the NSX Manager database. By default rules are applied to the "distributed firewall", which applies them to every virtual machine's vNIC regardless of the virtual machine's location. This creates the same problem as applying on every interface: each vNIC has a long list of rules to attempt to match.

This is where the apply to tag becomes interesting.   In order to explain I’ll use a simple example:

Two virtual machines: 172.16.0.2 on VNI 5000 and 172.16.20.2 on VNI 5002.

My default firewall rule set allows them to communicate without any issues. Let's assume I want to block all traffic between these machines, so I create the following rule:

[Screenshot: the block rule with Apply To set to Distributed Firewall]

Source:  172.16.0.2 virtual machine

Destination: 172.16.20.2 virtual machine

Service: Any

Action: Block

Apply to: Distributed firewall (default)

 

Using Traceflow we can identify where it was blocked:

[Screenshot: Traceflow output showing the packet dropped at the source]

You can clearly see that the distributed firewall default applied the drop action at the source. This is really great because it limits the traffic on the physical wire. Since the source is a managed object in NSX, the rule is enforced as early as possible. If the source is a physical entity that is not managed by NSX, the rule will be applied at the destination instead. This is hard to prove because Traceflow cannot provide visibility into physical entities.

What does apply to do?

Simply put, it tells NSX where to apply the firewall rule. Let's examine some of the options for my rule above:

  • Host
  • Cluster
  • Virtual machine
  • IP or MAC set
  • etc..

It provides the full list of objects that DFW rules can be built with, including dynamic sets and tags. This is really powerful. For the sake of this example let's apply the rule to the destination virtual machine instead of the distributed firewall.

[Screenshot: the block rule with Apply To set to the destination virtual machine]

Using traceflow we can see the results:

[Screenshot: Traceflow output showing the packet dropped at the destination]

My attempted connection was dropped at the destination, where I applied the firewall rule. You can also see that between steps 7 and 8 the packet left host 3 and went across my physical network to host 1 (a black hole of visibility).

Why use the apply to feature?

  • Reduce the number of rules applied to each vNIC
  • Enforce the rule at a specific location (think situations with VM overlap or rule overlap)

Apply to does add to the complexity of the environment and troubleshooting, but it can limit scope. This is where careful planning and understanding of the environment really help. Arkin can help as well, but that's another day's post.

Greatest tool for NSX!

I want to let you in on a little secret of NSX called Traceflow.   It was made available in the 6.2 release and I am in love with it.   In order to explain my love let’s do a history lesson:

History Lesson (Get off my lawn kids time)

Back in the old days (pretty much right now in every enterprise) you had a bunch of switches, routers and firewalls.   When a server was having a problem communicating with another server you had to trace its MAC address through every hop manually.   You might be lucky and use a SIEM to identify if a firewall was dropping the traffic.   Understanding each hop of the traffic is a pain.    It takes time and can be very complex in enterprise implementations.

Enter NSX

NSX does some complex routing, switching and firewalling.   Your visibility into the process in the past was articles like mine.   With traceflow you can prove your theory and identify data paths.    It still does not have visibility beyond the NSX world and into the physical.   Hopefully some day we will have that too.   Traceflow can get you pretty close.

Where is this traceflow of which you speak?

Log in to vCenter, select Networking and Security, and it's on the right side most of the way down. It allows you to select a source and a destination and then inject packets. The NSX components report back as the injected packet passes by, allowing you to trace the flow of communication.

Show me some meat

Sounds good. Let's assume we have two virtual machines, 172.16.0.2 and 172.16.0.3, both on VNI (think VLAN) 5000. They are on the same ESXi host and there are no firewall rules blocking traffic. Here is the output from Traceflow:

[Screenshot: Traceflow between two VMs on the same host]

Look at that. The injected packet came from 172.16.0.2, hit the vNIC firewall, then was forwarded directly to 172.16.0.3's vNIC firewall and into the machine. This is simple and exactly what we expect. Let's do the exact same thing except move the second machine to another ESXi host:

[Screenshot: Traceflow between two VMs on different hosts, crossing the VTEPs]

Now we have added the VTEP (VXLAN tunnel endpoint) connection between ESXi hosts. VTEP communication is layer 3 between ESXi hosts, stretching VNI 5000 between hosts whether they are far apart or right next to each other.

Neat meat, but it really only shows layer 2 communication and that's easy

How about some routing then. Two virtual machines: 172.16.0.2 on VNI 5000 and 172.16.10.2 on VNI 5001, both on the same ESXi host:

[Screenshot: Traceflow between two VMs on different logical switches, showing the logical router]

Look at that: now we see the logical router in the mix, taking the traffic from logical switch LS-172.16.0 and routing it to logical switch LS-172.16.10. Suddenly the flow of traffic is not a mystery.

What about if the firewall is blocking the traffic?

I assumed you would ask so here is a new firewall rule I added:

[Screenshot: the firewall rule blocking the traffic]

And the traceflow:

[Screenshot: Traceflow output showing the packet dropped by the firewall rule]

Yep my packet was dropped and it tells me where and what rule number blocked it.

What is the only problem with traceflow?

That it does not show the traffic flow on my physical network. This should be fairly simple to add: since all my NSX traffic is routed, there are no complex layer 2 stretches or long lists of VLANs to verify. It's just routed communication that can start at the top of rack with the correct design.

Why I took a pay cut to work at VMware

Warning:  This is a love rant for VMware.  You might want to skip if you are looking for the normal technical details of my blog.

Great title, eh? Really catchy and intended to get you to read, and it's true. Two months ago I left a great job with IBM to work for VMware and took a pay cut to do it. When you switch jobs there are lots of reasons, and money is only part of the deal. You might leave a job for the following reasons:

  • Too much travel
  • Bad situation with management
  • No career growth potential
  • Money
  • A new challenge
  • etc…

The reasons are often a combination of these and other factors. I left my job as a Senior VMware Architect at IBM to work as a Solutions Architect for VMware, and I wanted to take this blog post to explain why I took the job. Some years ago I was a happy yet bored systems administrator. My boss suggested that I attend an industry conference as a perk, so I went to VMworld. I came back from the conference really excited about the future of VMware and the cloud. I was invigorated by the energy and the vision of VMware's executives. I continued to learn about their technology and found it very refreshing in the market. This led me to focus my career away from Linux and toward VMware technology.

Culture

I have taken two runs at VMware jobs in the past and did not make it. Each time I was interviewed by multiple people, and each of those people took the interview time to teach me new skills and help shape my thinking for success. I love that attitude. It's a simple attitude that we are stronger as a team than as individuals. Most of the company has this attitude and I love it.

Innovation

VMware's technology continues to push the limits of the traditional datacenter while proving real business value. They have proven to be innovative through acquisitions (NSX) and internal research and development (vSAN). I love this approach; too many companies stop internal research and lose the innovative spirit, and this is simply not true at VMware.

Career Growth

VMware takes career growth seriously. Two weeks after starting with the company my boss asked me for specific goals that can affect my bonus structure. These goals are recorded and tracked, and VMware is serious about enabling me to meet them. Managers seem to be interested in retaining talent by supporting growth and interests.

New Challenges

Yes, it's true: I am a challenge junkie. It's what drove me to get two VCDXs in two years. Once a goal is on paper I am a nut case about achieving it. Working for VMware as a Solutions Architect represents some new challenges and lots of learning, which will keep me going for a while.

It’s all great when you are new

I completely agree that I am very new to the company, so my view is narrow. I believe the future is very bright for VMware and I am excited to join them on the journey.

Storage in Virtualization: is it a real problem?

As a VMUG leader and a double VCDX I have seen one technology trend only increase over the years: the number of storage vendors! Last year at our VMUG UserCon every sponsor looking for a presentation slot was a storage vendor. We had to choose between storage vendors and other storage vendors; I would have killed for another type of vendor. In past years we had presentations from backup vendors, management tools, monitoring tools and IT service companies. Now it's all storage companies. As a double VCDX I get contacted by start-up companies looking to sell their products to VMware customers. Some are well-known companies, others are still in stealth, but they all have the same request… how do we get VMware guys to buy our awesome technology? Almost all of these companies are using a Super Micro white-box solution with some secret sauce. The sauce is what makes them different: some are web-scale, others are all flash or boast awesome dedupe ratios. All are attempting to address some segment of storage problems. It really raises the question: is there a storage problem?

 

What does storage provide?

Storage essentially provides two things that virtualization professionals care about:

  • Capacity (Space to store information)
  • Performance (divided into IOPS and latency)
    • IOPS – input/output operations per second, the number of commands you can shovel into the system
    • Latency – how long it takes to shovel each IO end to end (a quick worked example follows this list)
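A quick way to see how these two interact: the IOPS you can achieve is roughly the number of IOs you keep in flight divided by the latency of each one. For example, 32 outstanding IOs at 1 ms each is about 32,000 IOPS, while the same 32 outstanding IOs at 5 ms each is only about 6,400 IOPS. Keep that relationship in mind for the queue depth discussion later in this post.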

Each vendor provides layers of software to improve these metrics, for example dedupe for capacity or hot blocking for performance. Essentially this is the role of storage systems: to provide these functions.

How has virtualization made it worse?

Virtualization has made management of these metrics a challenge. In traditional storage a single entity controls a LUN or mount and runs an application with fairly predictable usage patterns for that LUN. For example, a web server does a lot of reads and a few writes. We can identify and classify this usage pattern and thus "right size" the LUN to meet these needs, in terms of both capacity and performance. Virtualization created a new pattern: lots of guest servers with different applications sharing the same LUN. This makes the usage metrics pretty wild. The storage system has no idea what the virtual machines are doing beyond a bulk understanding of reads and writes. This seems like a problem, but in reality the storage system just sees reads and writes and does not care, unless capacity or performance for that LUN is exhausted. This issue might drive the acquisition of higher-performance storage to meet the needs of our new "super LUNs", but in most cases it just takes advantage of unused capacity on a storage array.

What does desktop virtualization have to do with storage?

Desktop virtualization taught us a very important lesson about storage. Operating systems are roughly 90% idle except during boot, and during boot they do a lot of IO: lots of reads and some writes, putting pressure on disk. Desktop virtualization introduced a new pattern of pressure on disk. Between eight and nine AM everyone would boot their virtualized desktop (spawning new desktops and booting the OSes), putting massive pressure on storage. This caused storage systems to fail, and if the storage was shared with traditional server virtualization everything failed. The traditional storage vendors' solution to this problem was to buy a bigger array with more cache and capacity. That created stranded capacity and was a huge CapEx expenditure when desktop virtualization was "supposed" to save us money.

Role of Cache

The rise of SSD has dramatically increased the size of cache available in arrays. Cache provides ultra-fast disk for initial writes and common reads, reducing latency and improving IOPS. I remember the days when 1GB of cache was awesome; these days arrays can have 800GB of cache or more. Cache allows you to buy larger, slower capacity disks while getting better performance to the virtualized application. Cache is a critical component in today's storage solutions.

How to solve desktop virtualization

Vendors saw a technology gap in desktop virtualization that was not being filled by traditional array vendors. This gap can be defined as:

  • The array was not meeting my performance needs without buying more arrays
  • I need to separate my IOPS for desktop virtualization away from servers

 

This gave rise to two solutions:

  • Hyper-converged infrastructure
  • All Flash arrays

 

 

Hyper-converged

Hyper-converged infrastructure has many different definitions depending on who you ask. For the purpose of this article it's a combination of x86 hardware with local hard drives, providing both the compute and a software-based clustered storage solution for virtualization. The local hard drives on each compute node contribute to the required cluster file system. This model has long been used by large service providers like Google and Amazon. These are normally presented to ESXi over NFS. The market leader at this time is Nutanix, who really cut their teeth solving desktop virtualization problems and have since moved successfully into traditional server virtualization. Their success has encouraged other vendors to enter the market, including SimpliVity (OmniCube) and VMware (Virtual SAN). Each vendor has its own mix of secret sauce to address a perceived problem. It's beyond the scope of this article to compare these solutions, but they all take advantage of at least one SSD drive as a per-node cache. This local cache can be very large compared to traditional arrays, with some solutions using 1TB or more of local cache. Each compute node serves as a storage controller, allowing capacity and performance to scale as nodes are added. Hyper-converged solutions have seen huge growth in the market and do effectively resolve the desktop problem, depending on scale. They do introduce a new problem: balanced scalability. Simply put, I may need additional storage without needing more controllers or compute capacity, but in order to get more storage I have to buy more nodes. Vendors address this balanced scale issue by providing different mixes of storage and compute nodes.

 

All Flash Arrays

With the rise of SSD the cost keeps getting lower, so traditional array vendors started producing all-flash arrays. Flash provides insane amounts of IOPS per disk, but lower capacity. Each month SSD capacity increases and cost drops, making the all-flash array (AFA) a very real, cost-effective solution. Years ago I was asked to demo a newly emerging flash solution called RamSAN. The initial implementation was 150,000 IOPS in a single 2U unit. I was tasked with testing its limits. I wanted to avoid artificial testing, so I threw a lot of VMware database workloads at the array (all test workloads of course). I quickly found out that the solution might be able to do 150,000 IOPS, but my HBAs (2 per host) did not have enough queue depth to drive 150,000 IOPS. All-flash arrays introduced some new problems:

  • Performance bottleneck moved from the disk to the controller on the array
  • Capacity was costly
  • New bottlenecks like queue depth could be an issue

I remember buying 40TB of SSD in a more recent array. The SSD drives combined were capable of 300K IOPS while the controllers could not push more than 120K IOPS (a single controller was able to do 60K IOPS). Quickly the controller became my problem, one that I could not overcome short of buying a new array with additional controllers. Traditional array vendors struggled with this setup, bound by their controller architecture. A number of startup vendors entered the market with controllers that scale. All-flash solutions can potentially solve the desktop problem, but at a steep cost.

 

Problem with both solutions

All solutions suffer from the same problems:

  • Stranded capacity in IOPS or storage capacity (more of either than you need)
  • Storage controllers cannot meet performance needs

All of these issues happen because of a lack of understanding of the true application metrics. vCenter understands the application metrics; the array understands reads and writes at a LUN level. Without seeing each virtual machine as an independent element, the administrator cannot increase the priority or preference of individual machines. Hyper-converged has two additional challenges:

  • Increased network bandwidth for data replication (compared to Fibre Channel arrays; NAS already has this issue)
  • Blades rarely have enough space for multiple hard drives

The value proposition for hyper-converged is that you can replace your costly array with compute nodes full of hard drives. This is a real cost savings, but only if you are due for a technology refresh on both compute and storage and your budgets are aligned and agreed to be spent on hyper-converged. Getting the storage team to give up funds for hard drives in compute can be a hard proposition.

 

How to understand the smallest atomic unit

Lots of vendors understand this problem and have different ways of approaching it, including:

  • VVols
  • Local compute cache
  • NFS

Essentially, to understand the smallest unit you have to understand the individual files and how they are connected. The VMFS file system handles all this information; block-based arrays only understand block-based reads and writes. Individual files are invisible to the block-based array.

 

VVols

Developed by VMware, VVols provide a translation method for block-based storage systems using protocol endpoints. These protocol endpoints run on the storage controllers or in line with the controllers and allow the array to understand the file system and individual files. This translation allows the array to act upon a single virtual machine on a LUN instead of operating on the whole LUN. We can apply performance policies, snapshots and all array operations to individual virtual machines. This is a great solution but has two problems:

  • The protocol endpoints much like controllers have scalability issues if not implemented correctly
  • Vendor adoption has been very slow

 

Local compute cache

This approach adds SSD or RAM as a cache for virtual machine reads and writes. The cache can be assigned to individual machines or shared across the whole compute node. This method understands individual machines and accelerates their reads and writes. In order to cache writes it's critical that the writes be redundant, so normally writes have to be committed to the cache of at least two different compute nodes before being acknowledged to the operating system. This ensures that the data is protected during a single compute node failure. The current leader providing read and write cache solutions like this is PernixData. This approach provides local performance enhancement at the lowest atomic level but shares some common challenges with hyper-converged, including:

  • Every compute node must have local SSD to accelerate the solution
  • Network bandwidth for replication is used (meaning you need more 10Gb links or you have to share them)

NFS

NFS has been around for years. It's a method for sharing a file system to Linux and Unix hosts. VMware supports it natively and it's the only supported datastore type (other than VMware vSAN) that is not running VMFS. VMs on NFS are files on the NFS file system, which gives the storage array or server full understanding of the individual files. This exposure can be a huge advantage when looking at backup products and site-to-site replication. Until NFS version 4 support (vSphere 6) there were a number of drawbacks to NFS, including multipathing; those have been removed, and NFS provides the full object-based storage solution that VVols promise. Scalability can be a problem, with a maximum number of virtual machines and objects on a single volume, or with capacity around controllers. NFS-based solutions are network based and thus create network workload. In addition, NFS natively does not provide any per-file performance enhancement; it just deals with IO in and out. Lots of vendors have implemented solutions to enhance NFS.

What is best and does it solve the issue?

I started this post with the question: is there a problem with storage? Well, lots of vendors seem to think so and want to sell us stuff to solve it. From my experience we have a few issues:

  • Backup is a major mess, in vSphere it’s hard to manage and keep working without constant care and feeding
  • Storage arrays don’t have any understanding of the lowest atomic unit and thus cannot protect us from bad neighbors on the same lun, this becomes more of an issue in large hosting environments.
  • Performance (IOPS) is rarely the issue except in specific use cases or small business thanks to oversized arrays
  • Queue Depth is rarely the problem except in specific use cases
  • Capacity seems to be the buzz problem and the price per year just keeps getting lower

Backup

I believe we need to get to object-based storage so we can solve the backup problem. VDP backups or LUN snapshots do not allow management at the lowest atomic unit. The current model causes crashes and outages and struggles to work well. It's not a product issue; it's an implementation and technology issue that needs a dramatic change to resolve.

Local knowledge at the lowest level

The object I manage is a virtual machine. My storage array friend manages a LUN or volume with multiple virtual machines (sometimes hundreds – yes, I am looking at you NFS). Until we manage at the same atomic level we will have problems aligning policies and performance. I think policy-based enforcement with shares is a great way to go… something like SIOC that is enforced by the array. Hot blocking, all flash, etc. are all fixes that attempt to get around the essential communication gap between the hypervisor and the array. Future storage cannot be bound by two storage controllers; it needs to scale to meet needs. The hyper-converged folks have a big advantage on this problem. The future of storage is not block, except in mixed enterprise environments (I am looking at you, mainframe). You need to get comfortable with network-based storage and architect for it. Buy switches and interfaces on your compute just for storage traffic; don't mix it. Architect a superhighway to your storage that is separate from your normal network traffic.

Performance

If performance is your issue, solve it locally; don't buy another array. Local cache will save you a lot. Scale-up arrays or hyper-converged are both options, but local SSD will be a lot cheaper than a rip and replace, and it's easier on management cost.

What should I choose?

It depends on your needs. If I were presented with a greenfield that was going to run all virtualized workloads today, I would seriously consider hyper-converged. Storage arrays are more mature but move a lot slower on updates. I would move toward a more software-defined solution instead of installed hardware. I think central understanding of the lowest atomic unit is critical going forward. If you have a mixed storage environment or an investment in Fibre Channel, large arrays with cache make sense. If you are looking to solve VDI issues I would consider hyper-converged or lots of cache. The future is going to hold some interesting times. I need storage to provide the following:

  • No controller lock-in; it needs to scale to meet my needs
  • It needs to understand the virtual machine individual identity
  • It should include backup and restore capabilities to the VM level
  • It has to include data at rest encryption (yes I didn’t mention this but it’s huge)
  • Policy based performance (allocate shares, limits and reservations)
  • Include methods to move the data between multiple providers (move in and out of cloud)

Does it sound like a unicorn… yep it is… Someone go invent it and sell it to me.

 

vRO Scriptable tasks

An old friend contacted me today with some vRO questions. He is struggling with learning vRO, like so many administrators before him. It's not easy to learn vRO; it's a very different type of programming. Once you learn the basics of the editor, and especially JavaScript in scriptable tasks, it becomes really powerful. Here are a few tips that I provided to my friend that may help you out when learning vRO:

  • JavaScript is case sensitive – this is hard for PowerShell or Windows users to remember
  • Variables are local to your scriptable task unless you make them an input or output (or both)
  • JavaScript variables don’t have defined types during creation (they are just pointers to memory locations)
  • Errors from JavaScript are not always very helpful (they point you to the incorrect or offending line)
  • You really should validate your input before taking an action

 

Case Sensitivity

I have rewritten a whole script only to find out I missed a case. A common example is System.log: the correctly cased call is valid, while variations in case all fail with an odd error. You have to watch case on reserved commands. This is true of API explorer objects as well: VcVirtualMachine is VcVirtualMachine, not VCvirtualmachine, etc.
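For illustration, a minimal sketch of the kind of thing that bites people (the messages are just examples):

System.log("this works");      //correct case
system.log("this fails");      //lower case s: system is not defined
System.Log("this fails too");  //capital L: Log is not a method of System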

Types

JavaScript has lots of different variable types, including user-defined objects. You can discover the object type using System.log. Assume that my object / variable is called myObject (new itself is a reserved word in JavaScript, so don't use that as a variable name):

System.log(myObject);

This will output the type of the object into the log window when run, which is very useful for discovering the object type. The same trick works with the vCenter API: when you are not sure what the API handed back, just log it and it will tell you the object type.
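As an example, here is a minimal sketch reusing the getAllVMs library action from the exclude-list workflow earlier in this post (it assumes the vCenter plugin is configured):

//log the VM list and its first element to see their types
var vms = System.getModule("com.vmware.library.vc.vm").getAllVMs();
System.log(vms);     //logs the array object
System.log(vms[0]);  //logs the type of one entry, e.g. a VcVirtualMachine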

Almost everything in vRO is a complex object, which can be loosely defined as a combination of key/value pairs in an array. A traditional array, by contrast, holds multiple values referenced by index. For example, with an array of strings with a length of 3, logging the first element would display "magic".
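A minimal sketch of that array (the other values are made up for illustration):

var words = ["magic", "beans", "stalk"];
System.log(words.length);  //displays 3
System.log(words[0]);      //displays magic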

An object is like an array with multiple key/value pairs for each array element. This allows me to store multiple elements that are connected, and the entries in the object don't have to be the same type (e.g. firstname and lastname are both strings but I could add an age entry to hold a number). This provides huge flexibility. For those familiar with PowerCLI it is the same idea as piping Get-VM into a full property listing, which displays every VM with the full list of elements on the vm object.
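A minimal sketch of such an object in vRO JavaScript (names and values are made up):

//an array where each element is an object made of key/value pairs
var people = new Array();
people.push({firstname: "John", lastname: "Smith", age: 42});
people.push({firstname: "Jane", lastname: "Doe", age: 38});
System.log(people[0].firstname);  //displays John
System.log(people[1].age);        //displays 38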

Error Handling

Errors will display the line number; use that to determine the cause or source. Almost every programmer needs the ability to test their assumptions about variables while writing a program, and System.log is the tool for this. The System log is readable only by someone in vRO and is not exposed to customers (unless they are running the workflow from vRO themselves).
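A couple of example breadcrumb log lines of the kind I mean (the variable names are just examples from earlier in this post):

System.log("vmid array now contains " + vmid.length + " entries");
System.log("current vm is: " + vm.name);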

Input validation

First check for null, for example:
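A minimal sketch of that guard (the message is just an example):

if (vm == null)
{
    throw "No virtual machine was passed to this workflow";
}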

Then validate your object type using the System.log method above. There is also a way to check for complex system-defined objects like this:

if (vm instanceof VcVirtualMachine)
{
    //Do some action
}

This can be very useful when working with objects: validate before you act.

 

Hope it helps