
About Cloud Foundry Service Brokers

Cloud Foundry offers consumers of the platform all kinds of backing services. Think of services like MySQL, Redis and RabbitMQ. Those services are offered to consumers through the Cloud Foundry marketplace.

To be able to create instances of the services in the marketplace and then bind them to an application, Cloud Foundry uses Service Brokers. A Service Broker implements the Cloud Foundry Open Service Broker API and takes care of provisioning services. It also provides credentials to a service so an application can connect to the created service instance. The CF service broker API is a REST API specification. You can implement this API any way you like. Most service brokers seem to be written in either Golang or Ruby but it doesn’t really matter in which language you implement the API. It doesn’t matter where and how you run it either. As long as the broker is reachable for the Cloud Controller, Cloud Foundry will be able to consume it.
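To make that a bit more concrete, below is a rough sketch of the main calls the Cloud Controller makes against a broker. The paths follow the v2 Service Broker API; the host, GUIDs and payloads are placeholders, and I’ve left out the authentication header brokers normally require:

```shell
# Fetch the catalog of services and plans the broker offers
curl http://broker.example.com/v2/catalog

# Provision a new service instance
curl -X PUT http://broker.example.com/v2/service_instances/<instance-guid> \
  -H 'Content-Type: application/json' \
  -d '{"service_id": "<service-guid>", "plan_id": "<plan-guid>"}'

# Bind the instance to an app; the response contains the credentials
curl -X PUT http://broker.example.com/v2/service_instances/<instance-guid>/service_bindings/<binding-guid> \
  -H 'Content-Type: application/json' \
  -d '{"service_id": "<service-guid>", "plan_id": "<plan-guid>"}'
```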

In conversations with customers I noticed some misconceptions around service brokers, so in this post I want to shed some light on what a service broker is and what it’s not.

Let me start out by listing what a Service Broker is NOT:

  • A Service Broker is not a reverse proxy of some kind
  • A Service Broker is not a connector
  • A Service Broker is not a service in and of itself

So what does a Service Broker do? Let’s walk through how a Cloud Foundry platform user would consume a MySQL database and map that to service broker operations:

  • User lists the content of the marketplace: cf marketplace
    • Cloud Foundry will list all the services that are offered by registered service brokers
  • User creates a MySQL service instance: cf create-service mysql 100mb mydatabase
    • This command tells Cloud Foundry the user wants to consume the 100mb plan of the mysql service. The service will be referenced as “mydatabase” within Cloud Foundry. This won’t be the actual database name.
    • Cloud Foundry will call the “provision” API resource on the service broker that offers the mysql service
    • The service broker will now create a new database instance for the user and respond with an HTTP 201 status back to Cloud Foundry
    • Cloud Foundry will save a reference to the service instance
  • Now the user wants to consume the database. He can do so by binding the created service to his application: cf bind-service myapplication mydatabase
    • Cloud Foundry will now do a call to the bind resource of the MySQL service broker API.
    • The broker will create a user for the MySQL database and send a response to Cloud Foundry containing the connection details (URI, username, password) for the database server (not for the broker but the DB server itself).
    • Cloud Foundry takes the response and populates the VCAP_SERVICES environment variable for the application. This environment variable contains a JSON string with all the information of all the services bound to the app (see the sketch after this list).
    • The app itself is responsible for parsing the JSON, getting the connection details and connecting to the database. From then on the broker is no longer in the loop.
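Here’s what that looks like from the developer’s side. cf env prints the environment of an app; the exact JSON shape below is illustrative, a sketch of a typical MySQL binding rather than the output of any particular broker:

```shell
cf env myapplication
# Relevant part of the output (structure is illustrative):
# "VCAP_SERVICES": {
#   "mysql": [{
#     "name": "mydatabase",
#     "plan": "100mb",
#     "credentials": {
#       "uri": "mysql://user:secret@10.0.0.5:3306/db_abc123",
#       "username": "user",
#       "password": "secret"
#     }
#   }]
# }
```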

In summary: the broker presents services to Cloud Foundry; CF can request service plans from the broker and request connection details for created services. After that point the broker is out of the loop. It brokered the connection, and now the application is directly connected to the service.

When a user no longer needs the service he can issue the cf unbind-service command. This will remove the information from VCAP_SERVICES and tells the broker to initiate the unbind task. What exactly happens then depends on the broker, but in the case of MySQL it will delete the user it created during the bind operation. After the unbind you can also issue a cf delete-service command. This tells the broker to get rid of the service. In the case of MySQL it will delete the whole database.
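In commands, continuing the example above (the described clean-up behavior is MySQL-broker specific):

```shell
cf unbind-service myapplication mydatabase   # broker deletes the MySQL user
cf delete-service mydatabase                 # broker drops the whole database
```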

In another post I will go into more detail on how to build your own broker.

Beyond automated deployment

I have been involved in quite a lot of automation projects over the last five years, all of them centered around VMware vRealize Automation and vRealize Orchestrator. During these projects customers throw all kinds of challenges at me, most of which I can solve. Over the years however I found two challenges that go beyond automated deployment which I can’t really solve using vRA/vRO:

  1. If you update a vSphere template, how do you make sure all machines deployed from that template are also updated?
  2. If you change a blueprint, how do you make sure those changes are also made to existing deployments from that blueprint?

The answer to both really is: you can’t. Not if you’re using vRA/vRO. Don’t get me wrong, I’m not trying to bash these products here. It’s just a result of how these products are designed and how they work.

In my opinion both problems boil down to the fact that in vRA blueprints you define the initial state of a deployment, not the desired state. So if you deploy a blueprint you get whatever was specified in that blueprint. Which is fine initially. But if you change the blueprint or update the template, nothing will be changed on the existing deployments. The other way around is true as well: If you change/damage your deployment, vRA won’t come in and fix it for you.

Now this seems obvious and not a big problem. After all: getting deployment times down from weeks to minutes using automation tools is a pretty good improvement in its own right. But if you think about it for a minute you’ll realize that once you have automated deployment, you need to spend the rest of your days automating day 2 operations. After all, the tool isn’t doing it for you.

For example, you’ll have to introduce a tool which manages patches and updates on existing deployments. You also need to figure out a way to keep your template up-to-date, preferably automated. And if somebody breaks his deployment you need to spend time fixing it.

Now, if you’ve been following my blog recently you probably already guessed the solution to this problem: BOSH :). Here are four reasons why BOSH makes your life as a platform operator easier:

  1. In BOSH a template is called a stemcell and stemcells are versioned. You don’t have to make your own; up-to-date versions of CentOS and Ubuntu stemcells are available online at bosh.io.
  2. When you’re using BOSH, software is installed on stemcells by using BOSH releases, which are versioned, available online and actively maintained.
  3. A BOSH deployment defines a desired state. So if a VM disappears BOSH will just re-create it, re-install the software and attach the persistent disk. Also, when you update the deployment manifest to use a newer stemcell version, BOSH will just swap out the current OS disk for the new one in a few seconds and everything will still work afterwards.
  4. All these parts can be pushed through a Concourse pipeline! The pipeline will even trigger automatically when a new stemcell version, release version or deployment manifest version is available. Below is a screenshot of a very simple pipeline I built, followed by a YAML sketch of the idea. This pipeline keeps both the software and the OS of my redis server up-to-date without me ever touching anything.
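The sketch below is not my actual pipeline, just an illustration of the triggering mechanism. The bosh-io-stemcell and bosh-io-release resource types ship with Concourse; all names and the release repository are illustrative:

```yaml
resources:
- name: stemcell
  type: bosh-io-stemcell
  source: {name: bosh-vsphere-esxi-ubuntu-trusty-go_agent}
- name: redis-release
  type: bosh-io-release
  source: {repository: cloudfoundry-community/redis-boshrelease}

jobs:
- name: deploy-redis
  plan:
  - get: stemcell
    trigger: true        # runs again when a new stemcell version appears
  - get: redis-release
    trigger: true        # ...or when a new release version appears
  # a task or put step would run the actual `bosh deploy` here
```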

You can find the source files for this pipeline here. In real life you’d probably want to add a few steps to this pipeline: first deploy to a test environment, then run some automated tests and only then push into production.

In summary: if you’re using BOSH not only do you get all the goodness of versioning and desired state configuration, it also enables you to employ Continuous Deployment for all your servers and software. You can even test new versions automatically so you don’t have to spend all your time just keeping your platform up-to-date.

What is Concourse CI?

This is my third blog in my “What is” series about different products that are part of the Cloud Foundry ecosystem. I discussed Cloud Foundry and BOSH earlier and now it’s time for the next one: What is Concourse CI?

So what is it?

The GitHub tagline for the Concourse project is “Continuous thing doer”, which is quite accurate. Some would call it a Continuous Integration tool. It serves the same purpose as the well-known tool Jenkins, but it works quite differently. You can find a comparison between Concourse and other CI tools here so I won’t go into details right now.

What is interesting to know though is that Concourse was born at Pivotal and has been the standard CI tool for Cloud Foundry and related projects for a while now. The product was born out of necessity: other tools just couldn’t deliver what the CF development teams needed. And what may be even more important: other tools don’t follow the design principles that all Pivotal and CF software follows. One of the most important ones being: “no snowflakes”.

Snowflake?

As you may know, each snowflake is different from all other snowflakes. It’s unique. And that’s fine when we’re talking about real snowflakes. It’s not so fine when it concerns servers, especially if you’re running hundreds of them. If every server is special you have to run a backup for each one of them regularly, and you need instructions on how to configure each server when it needs to be rebuilt or recovered after a disaster. Troubleshooting becomes difficult because you don’t know how it needs to be configured. After all it’s different from all other servers so you have no reference.

In order to avoid snowflakes CF, BOSH and Concourse use text files to store configuration for Apps, Servers and Pipelines. If a server or app fails you can just blow it away and reload from the config file. Done.

If you are using Jenkins for your CI you probably did a lot of specific configuration on the Jenkins server. If you lost the server you would need to spend a lot of time re-configuring it or restoring it from a backup. It’s different for Concourse though. In Concourse everything is stored in YAML files. Concourse server is gone? Build a new one from scratch and reload your pipelines from the YAML files. You already know that works fine. After all that’s how the config got there in the first place.

Concourse concepts

Pipelines are first-class citizens in Concourse CI. A CI pipeline is basically all the steps that need to be taken to get application code from the code repository all the way to production servers, or at least to a production release. Steps could be: download the code, build the code, run unit tests, run integration tests, deploy to Cloud Foundry.

Concourse pipelines consist of resources and tasks. Jobs are used to compose resources and tasks. The pipeline is described in a YAML file. Tasks can be described in the same YAML but are often described in external files. Since all this is stored in the same repo as the application code, versioning tasks and pipelines becomes really easy. For an example take a look at the ci folder in my demo app here. Below is a screenshot of what that pipeline looks like in the Concourse GUI.

[Screenshot: the demo app pipeline in the Concourse GUI]
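If you’d rather read YAML than look at a picture, a minimal pipeline boils down to something like this (a sketch, not my demo app’s actual pipeline; the repo URI, names and task file path are illustrative):

```yaml
resources:
- name: app-source
  type: git
  source: {uri: https://github.com/example/demo-app.git}

jobs:
- name: unit-tests
  plan:
  - get: app-source
    trigger: true                        # run on every new commit
  - task: run-tests
    file: app-source/ci/tasks/unit.yml   # task definition lives in the repo
```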

The online documentation for Concourse CI is excellent so I’ll be lazy and give you the link to the description of the concepts here in case you want to know more :).

Try it yourself

Before you run off and try it yourself let me tell you how to interact with Concourse. I already showed you the GUI. But know that the GUI is only intended to give a visual presentation of your pipelines. It is great to show on a big monitor in your dev team’s office.

Creating the pipelines and some other configuration tasks are done through the fly CLI. Which is nice, I hate taking my hands off the keyboard :).
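A typical fly session looks something like this (the target alias, URL and pipeline name are placeholders):

```shell
fly -t demo login -c http://concourse.example.com     # save a target alias
fly -t demo set-pipeline -p demo-app -c ci/pipeline.yml
fly -t demo unpause-pipeline -p demo-app              # new pipelines start paused
fly -t demo pipelines                                 # list configured pipelines
```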

If you want to try Concourse out for yourself then running the Dockerized version is probably the fastest way to get going. If you read my blog post about BOSH and gave that a go yourself you might want to try to deploy Concourse using BOSH. To help you get started I shared my BOSH manifest below. I couldn’t get the HTTPS part working so I left that out for now.


BOSH on VMware Photon Platform

I explained both BOSH and the Photon Platform in previous posts. I never did a post on how to deploy BOSH on vSphere but this document does a very good job describing the process. The only thing I want to add to that is: don’t use “@” in your passwords! It cost me a day or so to figure out what was going wrong. In this post I will detail how to run BOSH on the VMware Photon Platform.

Update 19-04-2017: This post was based on Photon platform 1.1.1. As of today the current version is Photon platform 1.2. The steps in this post may or may not work for version 1.2.

Prepare Photon Platform

  1. Install Photon Platform. This blog post details how to do that.
  2. Make sure you have the photon CLI installed. Instructions here.
  3. I’m going to assume that you don’t have anything configured on the Photon Platform yet. If you have, you’ll probably already know what to do. I’ll also assume this is a lab where you have full access.
  4. Connect the photon CLI to your Photon Platform.
  5. Create a photon tenant and tell the CLI to use it (press enter on any questions to use the default).
  6. Create a network. I’m going to assume you use the default portgroup named “VM Network”. If not, please substitute your network name in the command below.
  7. Create a resource ticket for the BOSH environment. I didn’t find a way to deploy to projects other than the one you deployed the BOSH director to, so make sure you create a big enough ticket to also fit the workloads you’ll be deploying with BOSH.
  8. Create a project that consumes the resources.
  9. Add some flavors. Flavors are the types of resources on offer on the Photon Platform, comparable to AWS instance types. All these steps are sketched as CLI commands below.
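Here is a hedged sketch of steps 4 through 9. The flags follow the 1.1-era photon CLI documentation from memory, so double-check them against photon --help; all names and limits are illustrative:

```shell
photon target set https://<photon-endpoint>:443
photon tenant create bosh-tenant
photon tenant set bosh-tenant
photon network create --name "VM Network" --portgroups "VM Network"
photon resource-ticket create --name bosh-ticket \
  --limits "vm.memory 64 GB, vm 100 COUNT"
photon project create --name bosh-project --resource-ticket bosh-ticket \
  --limits "vm.memory 64 GB, vm 100 COUNT"
photon project set bosh-project
photon flavor create --name core-200 --kind vm \
  --cost "vm 1 COUNT, vm.cpu 2 COUNT, vm.memory 4 GB"
```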

Deploy BOSH

Install BOSH cli tools

To be able to install BOSH you’ll need the bosh-init tool. This tool is like a mini BOSH which is able to deploy BOSH, so it’s kinda like BOSH deploys itself. I won’t explain how to install bosh-init; the Cloud Foundry docs on this are pretty good. Please follow the instructions here.

To be able to interact with a BOSH director once it’s deployed you’ll need the BOSH CLI itself. In this case you’ll even need it before the BOSH director is running because it’s used to build the Photon CPI release. Again, find the Cloud Foundry docs on how to install the BOSH CLI here.

Prepare the Photon CPI

BOSH is able to work with a lot of different cloud (IaaS) providers and platforms. I already mentioned vSphere, but BOSH is also able to use vCloud, AWS, Google and OpenStack. The magic that makes this multi-cloud solution possible is the Cloud Provider Interface or CPI.

VMware has published a CPI for Photon. It’s not published on the BOSH website yet but you can find it on GitHub. To be able to use the CPI you’ll want to install it into a BOSH director. How? Using a BOSH release of course. The Photon CPI BOSH release is here. Since there is no ready-built Photon CPI release we’ll have to build our own. Don’t be scared, it’s not that hard (disclaimer: I’m using Ubuntu; commands on a Mac should be similar, not sure about Windows though). Here we go:

  1. Make sure you have the git client installed on your OS.
  2. Create a folder to contain the CPI release and your deployment YAML. I used ~/my-bosh/photon.
  3. cd into the folder you created.
  4. Clone the Photon CPI release git repo, cd into the created folder and create the release (see the sketch after this list).
  5. There’ll be a dev_releases folder in the bosh-photon-cpi-release folder now. Copy the CPI tgz file to ~/my-bosh/photon.
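Roughly, steps 4 and 5 look like this. The repository URL is an assumption based on where the release lived on GitHub at the time, and the old v1 bosh CLI syntax is assumed:

```shell
git clone https://github.com/vmware/bosh-photon-cpi-release.git
cd bosh-photon-cpi-release
bosh create release --with-tarball            # build a dev release as a tarball
cp dev_releases/*/*.tgz ~/my-bosh/photon/     # copy the CPI tgz next to the manifest
```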

Create BOSH manifest

Deployments in BOSH are described in so-called manifests. These are files in YAML format containing a bunch of settings. Each type of deployment has its own manifest and so does the BOSH deployment itself.

You can find an example manifest for BOSH with the Photon CPI in the Photon CPI release git repo. I’ll share my own manifest below so you’ll have a feel for what it should look like with all the values populated. If you used the YAML from my blog post to deploy Photon, then you can use my BOSH manifest with just two changes:

  1. Change the network id on line 39. The command to get the id is shown below.
  2. Change the photon project id on line 114. The command to get the id is shown below.
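Assuming the photon CLI from earlier, the two list commands below print the ids you need:

```shell
photon network list    # id for the network (line 39 of the manifest)
photon project list    # id for the project (line 114 of the manifest)
```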

Save the manifest YAML to ~/my-bosh/photon/bosh-photon.yml.

Run bosh-init deploy

Now you can finally start the deployment. It’s very simple 🙂
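With bosh-init installed, a single command kicks off the deployment:

```shell
bosh-init deploy bosh-photon.yml
```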

And now we wait 🙂

Use BOSH

Now that we deployed BOSH we might want to use it for something useful. One of the simplest examples of something useful is deploying a redis server. Here are the steps involved (the full command sequence is sketched after the list):

  1. On the Photon Platform create another resource ticket and a new project consuming the ticket.
  2. Target the BOSH CLI at the fresh BOSH director and log in (if you’re using my YAML the password is ‘password’).
  3. Run bosh status to confirm you’re connected and BOSH is up and running.
  4. Upload the Ubuntu trusty stemcell.
  5. Upload the redis release.
  6. Create a cloud-config YAML for BOSH. Below is my YAML.
    1. Replace the project id on line 17
    2. Configure your IP range in lines 37..41
    3. Replace the network id in line 42
  7. Load the cloud config into BOSH.
  8. Create the redis deployment YAML. Again, below is my version of it.
    1. Replace the director_uuid. Retrieve the uuid by running bosh status
    2. Store the manifest in ~/my-bosh/photon/redis.yml
  9. Tell the BOSH CLI to use this manifest.
  10. Now deploy redis.
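Here’s roughly what those steps look like with the old v1 BOSH CLI; the director IP, file names and URLs are placeholders:

```shell
bosh target https://<director-ip>:25555
bosh login                       # admin / password if you used my manifest
bosh status                      # confirm the connection, note the director UUID
bosh upload stemcell <ubuntu-trusty-stemcell-url-or-tgz>
bosh upload release <redis-release-url-or-tgz>
bosh update cloud-config cloud-config.yml
bosh deployment ~/my-bosh/photon/redis.yml
bosh deploy
```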

After the deployment is finished you can list the deployments and the VMs it deployed by running the two commands below (both stock BOSH CLI commands):
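```shell
bosh deployments
bosh vms
```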

The output should be similar to this: [Screenshot: output of bosh deployments and bosh vms]

Phew… if you made it this far: congrats! You’re on your way to being a cloud native :).

What is BOSH?

In a previous post I went into what Cloud Foundry is and why you’d want to use it. What I didn’t go into was some of the magic behind the scenes. For infra-minded people like myself this part might be even more exciting than the platform itself. The thing that makes Cloud Foundry so robust and portable is called BOSH. So what is BOSH?

BOSH is a recursive acronym for “BOSH Outer Shell”. But that doesn’t tell you much about what it does. The bosh.io website explains: “BOSH is an open source tool for release engineering, deployment, lifecycle management, and monitoring of distributed systems.”

What does BOSH do?

It’s kinda hard to put BOSH in a certain box like “cloud management platform” or “software deployment tool”. BOSH does a lot of things: it deploys virtual machines, but it’s not strictly a virtual machine deployment tool. It deploys software, but it’s not just a software deployment tool. And last but not least, it also monitors, but it’s definitely not a monitoring tool.

It’s something better. BOSH deploys versioned software into a running infrastructure. The software needs a VM to run on so BOSH also deploys a VM. Once software is deployed it’s important that it keeps running. So BOSH also monitors the software and automatically heals the application when needed. If you accidentally delete a VM that’s part of a software deployment, BOSH will automatically redeploy the VM, install the software and rejoin the cluster.

BOSH components and concepts

A BOSH installation consists of the following components:

  • BOSH Director: This is what you could call the “BOSH Server”. It is the main part of the software that is responsible for orchestrating deployments and acting on health events.
  • BOSH Agent: This is a piece of software that runs on every VM deployed by BOSH. It is responsible for all the tasks that happen inside the VM.
  • CPI: The Cloud Provider Interface is a component that implements an API which enables BOSH to communicate with different types of infrastructure. There are CPIs for vSphere, vCloud, Google Cloud, AWS and even for RackHD if you want to deploy to physical hardware. The CPI basically translates what BOSH wants to do to the specific cloud platform you want to deploy to.

When working with BOSH you’ll use the following constructs:

  • Stemcell: This is a bare-bones virtual machine image that includes a BOSH agent. It’s an archive with some descriptor fields and a machine image. Stemcells are platform specific, so there are stemcells for AWS, vSphere and so on. In the case of a vSphere stemcell you’ll simply find a VMDK packaged inside. You can download publicly available stemcells but you can also build your own if you want to.
  • Release: A BOSH release is a bundle of everything that is needed to deploy a specific application, excluding the virtual machine templates. So it includes all runtimes, shared libraries and scripts that are needed to get the application running on a stemcell. There are public releases for a lot of open source software, including Cloud Foundry.
  • Manifest: This is a YAML file that describes how stemcells and releases will be combined into a deployment. It describes the desired state. If you’re familiar with vRealize Automation, this is basically a blueprint.
  • Deployment: A deployment is basically the execution of a manifest. A deployment can contain many machines. When deploying, BOSH uses the manifest to determine the desired state. When you run the deployment BOSH will determine the current state and will do what is necessary to get to the desired state. This is contrary to what vRealize Automation does. When you change a vRA blueprint, that does not change any of the deployments. But if you change a BOSH manifest and run deploy again for that manifest, BOSH will implement whatever changes you made to the desired state. A minimal manifest sketch follows below.
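To give you a feel for it, here’s a heavily trimmed sketch of a v1-style manifest. All names and versions are illustrative, and a real manifest also needs resource pool, network, compilation and update sections:

```yaml
name: redis
director_uuid: <uuid from `bosh status`>

releases:
- name: redis
  version: latest

jobs:
- name: redis
  instances: 1
  templates:
  - name: redis          # job template from the redis release
  networks:
  - name: default
```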

Can I try it?

Start out with bosh.io. The documentation is quite good but the learning curve can be a bit steep. I hope to give you some pointers on how to get it running in another blog post soon.

What is Cloud Foundry?

I recently did a talk at the Dutch VMUG UserCon in which I showed how easy it is to deploy software by using Cloud Foundry. I recently mentioned it in a blog post as well. I also published a whitepaper on automation in which I mention Cloud Foundry. But I realized that in the VMware community Cloud Foundry might not be very well known and understood. Instead of telling you all to “just google it”, in this post I’ll try to answer the question “what is Cloud Foundry?”

So what is Cloud Foundry?

Let me start out with a quote from the Cloud Foundry website: “Cloud Foundry is the industry standard cloud application platform that abstracts away infrastructure so you can focus on app innovation”. So Cloud Foundry is a platform that can run your cloud apps for you.

What does that mean? It means Cloud Foundry (CF for short) is a collection of software that together forms a platform on top of your infrastructure. Developers can use the platform to deploy their software using simple command line tools or an API. The infrastructure and platform admins don’t have to have knowledge about the software being deployed, nor do they have to be involved in the deployment of the software.

What does it do for you?

The CF platform takes care of everything: a command line utility takes the source code of an app, or even the compiled binary, and uploads it to the platform. The platform will then compile the code as needed and create a so-called “droplet”, which is conceptually similar to a Docker image. Everything that is needed to run the app is included in the droplet. So if you upload a war file it will automatically include a Tomcat server; if you upload node.js code it will include the node.js runtime. When the droplet is ready to run, CF will start it in its “elastic runtime”.

But it doesn’t stop there. When it’s deployed CF will monitor the app and when it crashes it will automatically restart it. It can monitor the load on the application and automatically scale it out or in as needed. Developers can also manually scale their application with a few keystrokes, as sketched below.
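The whole developer workflow fits in a handful of cf CLI commands (the app name and sizes are illustrative):

```shell
cf push myapp            # upload, stage and start the app
cf app myapp             # show status and health of the instances
cf scale myapp -i 4      # scale out to four instances
cf scale myapp -m 1G     # give each instance 1 GB of memory
```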

Open source

Cloud Foundry is completely open source and has a very active community. There are commercial distributions out there but nothing stops you from running the open source version. For free :).

CF supports multiple platforms: vSphere, vCloud, AWS, Google Cloud and even bare metal through RackHD!

But why?

So Why would you want to use Cloud Foundry?

  1. It makes the life of infrastructure operators (Cloud Ops) really easy. When you’re using VMware’s vRealize Automation you’re probably used to writing tons of scripts and workflows just to make automated software deployment work. When you use Cloud Foundry you don’t have to do any of that.
  2. It makes the life of your developers very easy. They don’t have to tell infra operators how to deploy their software. They can simply do it themselves by running “cf push”.
  3. It enables a clear separation between DevOps and Cloud Ops. Developers can now deploy, run and operate their own app without bothering the infra team. The infra team on the other hand doesn’t have to spend time understanding and deploying the developers’ apps. So with CF it’s possible to have a Cloud Ops team that is responsible for the platform while the developer teams are not only responsible for developing their app but can employ true DevOps. CF gives them the tools to manage their own app in production.

Show me!

Want to see it for yourself? You can use a public Cloud Foundry platform if you want to try to deploy some code on it: run.pivotal.io

If you’re more interested in the platform itself, check out the Pivotal website. Pivotal provides a distribution of Cloud Foundry which makes it really easy to get going. Fair warning: you need a decent amount of disk and memory resources to run the whole platform.

The Why, What and How of Automation

Today my first ever whitepaper was published. It’s titled “The Why, What and How of Automation”. Here is the teaser:

The current digitization wave puts an ever increasing load on enterprise IT departments. At the same time the business is expecting shorter delivery times for IT services just to stay ahead of the competition. To keep delivering the right services on time enterprise IT needs a high degree of automation.

The whitepaper explains why automation is so important, what you need to automate and how this can be done. Those who attended my NLVMUG session might notice that this whitepaper has the same title as my presentation. That’s obviously not a coincidence. If you missed the session make sure to download and read the whitepaper here: http://itq.nl/the-why-what-and-how-of-automation/

I’ll be posting a few more blogs on some of the topics in the whitepaper as well so stay tuned :).

The right tool for the job

I work with vRealize Automation and vRealize Orchestrator on a daily basis. And I really enjoy doing so, especially the custom code development part. vRO gives a lot of flexibility and it’s not often that I’m unable to build what my customers need. Whatever the request, I usually find a way to employ vRA and vRO in such a way that it fulfills the customer’s need. But more and more often I wonder if we’re using the right tool for the job.

Today I presented a break-out session during the annual NLVMUG UserCon. In the presentation we emphasized the importance of using the right tool for the job. After all, you don’t drive a nail into the wall with a power drill. You can do so if you really want to, but you’ll probably spend more time than needed putting up your new painting and likely destroy your power drill in the process. It’s similar in enterprise IT: you can use a customizable tool like vRA/vRO for nearly anything. But that doesn’t mean you should.

But if you can make it work anyway then why not? First of all: if you’re using a product to do something that it wasn’t originally intended to do, you’ll spend a lot of time and money making it do what you actually want. But getting the product to do that is only the beginning. Now you need to maintain the product customizations. Chances are something will break at the next product upgrade. So you postpone the upgrade, then postpone it again, and in the end the upgrade never happens because the risk is just too high.

Let me give an example: let’s say you’re trying to deploy in-house developed code through different life cycle stages. You could argue that everything needs to run on a virtual machine so you start out by automating virtual machine deployment. You’ll probably use vRA or something similar to do that for you. After this first step you realize that the code does not run on a bare OS; you may need IIS or .NET or Java or a bunch of shared libraries. So you decide to automate the deployment of middleware software as well. But that still isn’t enough to run the code. You also need a database, a load balancer, an SSL certificate and, last but not least, a way to deploy the code to your machines and configure the way it’s running. Oh, and of course all this needs to be triggered by the code repository and be completely self-service. By the time you have implemented all this you’ll have written tons of custom installation scripts and integration workflows.

Automating code deployment can be tricky to say the least. And in my opinion all this difficulty stems from the fact that we’re starting with the VM as the unit of deployment. The actual unit of deployment is the code/application your developers are writing. By using the wrong data as input for the tool selection you ended up with the wrong tool.

Luckily there are tools designed for application deployment. One of them is called Cloud Foundry. If you use the Pivotal distribution you can set it up in a day or so. And then your developers can just run cf push and their code is running. In the cloud. Sounds a lot better than writing countless installation scripts and custom integrations doesn’t it? Also, the Cloud Foundry platform gives you loads of options you wouldn’t have out of the box with tools like vRA: auto-scaling, easy manual scaling, application health monitoring, service bindings, application statistics, centralized logging, custom logging endpoints and lots more.

There is one major “drawback” however: your applications need to be cloud native or 12-factor apps. But you’ll have to transform your apps into cloud native apps at some point in the future anyway, so why not start now?

 

Automated directory synchronization of the vRA Identity Manager

Disclaimer: the API documentation has not yet been released, therefore I would like to note that this is currently an unsupported method of triggering a directory sync.

During a recent project the customer requested the functionality to create a new business group with just one click. This should be a function to onboard new teams into the vRA environment, including the creation of Reservations and Active Directory groups.

In vRA 6 this would not have been a problem at all, but starting with vRA 7 the Identity Manager was introduced. The Identity Manager, in short the connection from vRA to Active Directory (AD), synchronizes AD content on a specific schedule. This means that while specifying the different AD groups in the new Business Group, these will not be visible immediately but only after a synchronization.

As the customer stated, it should be an automated process, a click on a button. Waiting for the synchronization to take place is not an option… We are automating this, right?! Therefore my colleague Marco van Baggum (#vMBaggum blog) came up with the idea to automate the synchronization of the Identity Manager. In a shady corner Marco found the necessary API calls and off we go!

The first step is to create a new HTTP-REST endpoint in vRO. Run the workflow “Add a REST host” located at Library / HTTP-REST / Configuration and use the following settings:

Name: vRA
URL: https://<vRA FQDN>/ (e.g. https://itqlab-vra.itqlab.local/)
Authentication: NONE

* The other settings depend on how vRA is set up and how vRO connects to it.

A new endpoint should pop up in the inventory of the HTTP-REST plugin. Now right-click this endpoint and run the workflow to add the additional REST operations to it.

Name: Get Directories
Method: GET
URL template: /SAAS/t/{tenant}/jersey/manager/api/connectormanagement/directoryconfigs

 

Name: Get Directory Sync Executions
Method: GET
URL template: /SAAS/jersey/manager/api/connectormanagement/directoryconfigs/{directoryId}/syncexecutions

 

Name: Invoke Directory Sync
Method: POST
Content-type: application/json
URL template: /SAAS/jersey/manager/api/connectormanagement/directoryconfigs/{directoryId}/syncprofile/sync

 

Name: Login
Method: POST
Content-type: application/json
URL template: /SAAS/t/{tenant}/API/1.0/REST/auth/system/login

 

The images below show the configured operations in vRO


Now that the endpoint and operations are created, import the workflow package attached to this post (nl.itq.psi.vidm Workflows).

When the workflow package is imported, open the Configuration Elements tab and edit the Endpoints configuration element located under the ITQ folder. Select the correct HTTP-REST endpoint and REST operations, and insert the correct username, password and tenant to connect to vRA. As a side note, the API calls used here only work with a vRA local account; domain accounts will throw an “Invalid Credentials” error. Make sure that the user has the rights to execute a directory sync in vRA.
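For reference, here’s roughly what the same sequence looks like as raw HTTP calls. This is a hedged sketch: the login payload, the sessionToken response field and the HZN authorization header are assumptions based on the vIDM API, so verify them against your environment:

```shell
VRA=https://itqlab-vra.itqlab.local
TENANT=<your-tenant>

# 1. Log in with a vRA *local* account to obtain a session token
TOKEN=$(curl -sk -X POST "$VRA/SAAS/t/$TENANT/API/1.0/REST/auth/system/login" \
  -H 'Content-Type: application/json' \
  -d '{"username": "<local-user>", "password": "<password>", "issueToken": true}' \
  | sed -n 's/.*"sessionToken" *: *"\([^"]*\)".*/\1/p')

# 2. List the directories to find the directoryId
curl -sk "$VRA/SAAS/t/$TENANT/jersey/manager/api/connectormanagement/directoryconfigs" \
  -H "Authorization: HZN $TOKEN"

# 3. Trigger the sync for one directory
curl -sk -X POST \
  "$VRA/SAAS/jersey/manager/api/connectormanagement/directoryconfigs/<directoryId>/syncprofile/sync" \
  -H "Authorization: HZN $TOKEN" -H 'Content-Type: application/json' -d '{}'
```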

Now go back to the workflow overview and expand ITQ / PSI / VIDM / Helpers. You should have the same overview as in the image below.

[Image: vRO workflow structure]

Now execute the “Synchronize active directory” workflow and the synchronization will start!

[Image: vRO workflow execution]

Please note that these workflows are not production ready yet and bugs may exist!

Download nl.itq.psi.vidm Workflows!

vRO Code – Finding VirtualMachines by Custom Property

For the current project I’m involved in, I was asked to deliver a list of vRA deployed machines that have a Production status.

At first I wrote a short piece of code that obtained all vRA managed machines and for each machine gathered the custom properties. Creating this workflow actually took less time than the execution itself, as the environment has about 4200 managed objects. Besides the fact that waiting for this is time consuming, it also generates a lot of load on the vRO service and the vRA IaaS API.

The developer in me felt like improving this and moving the functionality to the vRA IaaS API; after all, the API has the custom properties linked to the virtual machine entity object. Eventually, after some research on ODATA queries and how to query for properties within linked entities, I was able to write the following ODATA filter:
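Something along these lines (the property name and value are illustrative):

```
VirtualMachineProperties/any(p: p/PropertyName eq 'Status' and p/PropertyValue eq 'Production')
```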

Putting the filter and the vCAC IaaS plugin logic together gives the following script, which can be used in either a workflow or an action:
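A sketch of what that script looks like. It assumes a vCAC:VcacHost input called host; the property name and value are again illustrative, and the exact argument list of readModelEntitiesBySystemQuery varies by plugin version, so check the vRO API explorer:

```javascript
// Property to search for (illustrative values)
var propertyName = "Status";
var propertyValue = "Production";

// ODATA filter querying the linked VirtualMachineProperties entities
var filter = "VirtualMachineProperties/any(p: p/PropertyName eq '" +
    propertyName + "' and p/PropertyValue eq '" + propertyValue + "')";

// Query the vRA IaaS model for matching VirtualMachines entities;
// 'host' is a vCAC:VcacHost workflow/action input
var entities = vCACEntityManager.readModelEntitiesBySystemQuery(
    host.id, "ManagementModelEntities.svc", "VirtualMachines", filter,
    null, null, null, null, null);

// Log the names of the machines that match the query
for each (var entity in entities) {
    System.log(entity.getProperty("VirtualMachineName"));
}
```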

To elaborate a little bit on the code snippet above:

  • First the property and its value are specified.
  • The second step is to set up the filter with the property and value.
  • The third step is to actually perform the call to the vRA IaaS API, which returns an array of vCAC:Entity objects based on the filter.
  • The last step in the code is to System.log() the names of the VirtualMachines that match the query.

When you need vCAC:VirtualMachine objects instead of vCAC:Entity objects, change the last part of the code to:
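A sketch of that variant, using the entity’s getInventoryObject() method:

```javascript
for each (var entity in entities) {
    // Convert the vCAC:Entity into its vCAC:VirtualMachine inventory object
    var vm = entity.getInventoryObject();
    System.log(vm.displayName);
}
```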

 

Conclusion

Gathering virtual machines based on specific properties can be a hassle using ODATA queries, as in some cases it is not completely clear how to structure the query. But once the query is ready and working, it proves to be much faster than writing a script that “hammers” the API for data. The two screenshots below show the actual difference between the initial code and the improved code. The first screenshot is of the original code: it errors out after 30 minutes of API calls. The second screenshot is a capture of the improved code: it runs for only a second to return the list of VirtualMachines matching the filter.

[Screenshot: first attempt ended in an error returned by the vRA IaaS API after 30 minutes of performing API calls]

 

[Screenshot: second attempt with improved code; the runtime of the script is now only a matter of seconds]