All posts by Christiaan Roeleveld

Chris has been working with virtualization products since 2004 and has grown into a real expert in that time. He works for ITQ, and one of his passions is sharing his knowledge and using all of his experience to find the best solution for his customers.

BOSH Release blobs

In my attempt to get Cloud Foundry running on Raspberry Pis I had to make some changes to a few BOSH releases. Most of the work involved swapping out blobs for other blobs. At first it wasn't very clear to me how the blobstore in a BOSH release works, so I thought it would be good to share what I learned.

Anatomy of a BOSH Release

A BOSH release consists of two main parts: Jobs and Packages. Jobs consist of a definition of how to start the software (monit and control files) and how to configure it (template files). Packages contain a script which takes care of compilation and installation of the software, a list of files needed for the package and a list of dependencies it might have on other packages.

The actual software or source code can be added to the BOSH release in two ways: you can put it in the src folder of the release (copy it or use submodules) or add it as a blob. Files referenced in your BOSH release will be looked up in src first, then in blobs.
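To make that concrete, here is a rough sketch of what a package spec can look like; the package name and file name are made up for illustration:

```yaml
# packages/golang/spec -- hypothetical package wrapping a Go tarball
---
name: golang
dependencies: []
files:
- golang/go1.8.linux-armv6l.tar.gz   # looked up in src/ first, then in blobs/
```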

Where are the blobs anyways?

If you git clone the source of a BOSH release from GitHub you'll probably notice that you don't get the blobs that are included in the release. So where did the blobs go? It turns out that when you create a final version of your BOSH release, the BOSH CLI uploads the blobs you added to a publicly readable S3 bucket. When you then download the release source and create a dev release (bosh create-release --force), the BOSH CLI looks up the bucket to use in config/final.yml and the list of blobs in config/blobs.yml, and downloads the blobs to the blobs folder in the release dir.

How to add my own blobs?

Adding blobs to a release is easy: bosh add-blob <path to file> <relative path in blobstore>. BOSH will copy the file to the blobs directory using the path you give as the last parameter. It will also create the entry in config/blobs.yml for you.
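For example, swapping in an ARM build of Go could look something like this (the file name and blobstore path are just placeholders):

```bash
# copy the tarball into blobs/ and register it in config/blobs.yml
bosh add-blob ~/Downloads/go1.8.linux-armv6l.tar.gz golang/go1.8.linux-armv6l.tar.gz

# list the blobs the release currently tracks
bosh blobs
```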

But this blog post is mostly written from the perspective of working on a fork of an existing release. So imagine you fork an existing BOSH release GitHub project. You want to swap out some of the blobs that are already in the release and create a new release from that.

If you just add the blobs and then run bosh create-release --final or bosh upload-blobs, you'll get an error because BOSH can't upload to the public (read-only) S3 bucket where the current blobs reside. You'll need to copy all the blobs over to a bucket you control. Here is how you do that:

  • fork and git clone the release repo
  • add your blobs using bosh add-blob
  • delete blobs that you no longer need using bosh remove-blob, or just remove the reference from blobs.yml
  • run bosh create-release --force; this forces BOSH to download all the blobs to your local disk
  • Configure S3 (you’ll need an Amazon AWS account):
    • Create an S3 bucket and make it read-only for the world
    • Create a policy, group and user with read/write permissions on the bucket
    • Generate an API key for this user
  • change config/final.yml to point to your bucket
  • create config/private.yml containing the credentials for your bucket
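For reference, these two files look roughly like this; the bucket name and keys are obviously placeholders, and you should keep whatever other settings the release's final.yml already contains:

```yaml
# config/final.yml -- points the release at your own, world-readable bucket
blobstore:
  provider: s3
  options:
    bucket_name: my-forked-release-blobs
```

```yaml
# config/private.yml -- read/write credentials, never commit this file
blobstore:
  options:
    access_key_id: AKIAXXXXXXXXXXXXXXXX
    secret_access_key: your-secret-access-key
```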

Finally, run bosh upload-blobs or bosh create-release --final --tarball <path to output tgz>.

That's it. You now have a completely forked BOSH release with your own blobstore.

Baking Clouds!

For the last couple of months I have been working on an experiment involving Raspberry Pis, BOSH and Cloud Foundry. The goal of this experiment is to run Cloud Foundry on one or more Raspberry Pis. My colleague Ruurd Keizer and I will be sharing our journey and demonstrating the result at the Cloud Foundry Summit in Boston in April!

Actually the goal is not to run all of the Cloud Foundry components on a Raspberry Pi, but specifically the Diego cell. Why, you ask? Well, first of all I thought it would be an interesting learning experience, but also because I think alternative CPU architectures, especially ARM, might take some of the datacenter market share from x86 CPUs. Giving Cloud Foundry multi-architecture support could make migrating from one architecture to another pretty easy. Mixing architectures in one Cloud Foundry platform would give platform consumers a choice between more powerful x86 CPUs and a little less powerful but also a lot less power-hungry ARM CPUs. I believe that for a lot of workloads the less power-hungry ARM chips are a good fit. And saving a bit of energy helps save the planet, so what's not to like 🙂

OK, enough about the why; I also want to tell a little bit about what I'm working on exactly. As I said, the final goal is to run a Diego cell on a Raspberry Pi. But deploying Diego and Garden-RunC is usually done by BOSH, and BOSH is designed to consume an IaaS like vSphere or AWS. There is support for physical machines through RackHD, but it isn't really possible to deploy Raspberry Pis with RackHD.

So in order to be able to deploy Diego cells with BOSH to Raspberry Pis I started from the ground up: I built a power management thingy, a (P)IaaS, a BOSH CPI and a stemcell. It took me quite some time, but surprisingly it all works now :). Currently I am in the process of customizing BOSH releases so they will actually compile on a Pi. This mainly means swapping out the golang blob for the armv6l golang blob. But unfortunately some releases contain more binary blobs that have to be replaced with their ARM counterparts.

I will be sharing more details on all of the components we built and changed, but for now we are too busy getting everything to actually work. The results so far are very promising, and I truly expect we can demo a "cf push" to a Pi during the CF Summit!

Stay tuned and see you in Boston!

About Cloud Foundry Service Brokers

Cloud Foundry offers consumers of the platform all kinds of backing services. Think of services like MySQL, Redis and RabbitMQ. Those services are offered to consumers through the Cloud Foundry marketplace.

To be able to create instances of the services in the marketplace and then bind them to an application, Cloud Foundry uses Service Brokers. A Service Broker implements the Cloud Foundry Open Service Broker API and takes care of provisioning of services. It also provides credentials for a service so an application can connect to the created service instance. The CF Service Broker API is a REST API specification. You can implement this API any way you like. Most service brokers seem to be written in either Golang or Ruby, but it doesn't really matter in which language you implement the API. It doesn't matter where and how you run it either: as long as the broker is reachable from the Cloud Controller, Cloud Foundry will be able to consume it.

In conversations with customers I noticed some misconceptions around service brokers so in this post I want to shed some light on what a service broker is and what it’s not.

Let me start out by listing what a Service Broker is NOT:

  • A Service Broker is not a reverse proxy of some kind
  • A service Broker is not a connector
  • A Service Broker is not a service in and of itself

So what does a Service Broker do? Let's walk through how a Cloud Foundry platform user would consume a MySQL database and map that to Service Broker operations:

  • User lists content of the marketplace: cf marketplace
    • Cloud foundry will list all the services that are offered by registered service brokers
  • User creates a MySQL service instance: cf create-service mysql 100mb mydatabase
    • This command tells Cloud Foundry the user wants to consume the 100mb plan of the mysql service. The service will be referenced as "mydatabase" within Cloud Foundry. This won't be the actual database name.
    • Cloud Foundry will call the "provision" API resource on the service broker that offers the mysql service
    • The service broker will now create a new database instance for the user and respond with an HTTP 201 status back to Cloud Foundry
    • Cloud Foundry will save a reference to the service instance
  • Now the user wants to consume the database. He can do so by binding the created service to his application: cf bind-service myapplication mydatabase
    • Cloud Foundry will now make a call to the bind resource of the MySQL service broker API.
    • The broker will create a user for the MySQL database and send a response to Cloud Foundry containing the connection details (URI, username, password) for the database server (not for the broker but the DB server itself).
    • Cloud Foundry takes the response and populates the VCAP_SERVICES environment variable for the application. This environment variable contains a JSON string with all the information of all the services bound to the app (see the example below).
    • The app itself is responsible for parsing the JSON, getting the connection details and connecting to the database. From now on, the broker is no longer in the loop.
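To give you an idea, the VCAP_SERVICES entry for a bound MySQL service looks roughly like the sketch below; the exact keys under credentials depend on the broker, and all values here are made up:

```json
{
  "mysql": [
    {
      "name": "mydatabase",
      "label": "mysql",
      "plan": "100mb",
      "credentials": {
        "uri": "mysql://user:secret@10.0.1.5:3306/db_abc123",
        "hostname": "10.0.1.5",
        "port": 3306,
        "username": "user",
        "password": "secret",
        "name": "db_abc123"
      }
    }
  ]
}
```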

In summary: The broker presents services to Cloud Foundry, CF can request service plans from the broker and request connection details for created services. After that point the broker is out of the loop. It brokered the connection, now the application is directly connected to the service.

When a user no longer needs the service, he can issue the cf unbind-service command. This removes the information from VCAP_SERVICES and tells the broker to initiate the unbind task. What exactly happens then depends on the broker, but in the case of MySQL it will delete the user it created during the bind operation. After the unbind you can also issue a cf delete-service command. This tells the broker to get rid of the service; in the case of MySQL it will delete the whole database.

In another post I will go into more detail on how to build your own broker.

Beyond automated deployment

I have been involved in quite a lot of automation projects over the last five years, all of them centered around VMware vRealize Automation and vRealize Orchestrator. During these projects customers throw all kinds of challenges at me, most of which I can solve. Over the years, however, I found two challenges that go beyond automated deployment and that I can't really solve using vRA/vRO:

  1. If you update a vSphere template, how do you make sure all machines deployed from that template are also updated?
  2. If you change a blueprint, how do you make sure those changes are also made to existing deployments from that blueprint?

The answer to both really is: you can't. Not if you're using vRA/vRO. Don't get me wrong, I'm not trying to bash these products here. It's just a result of how these products are designed and how they work.

In my opinion both problems boil down to the fact that in vRA blueprints you define the initial state of a deployment, not the desired state. So if you deploy a blueprint you get whatever was specified in that blueprint. Which is fine initially. But if you change the blueprint or update the template, nothing will be changed on the existing deployments. The other way around is true as well: If you change/damage your deployment, vRA won’t come in and fix it for you.

Now this seems obvious and not a big problem. After all, getting deployment times down from weeks to minutes using automation tools is a pretty good improvement in its own right. But if you think about it for a minute you'll realize that once you have automated deployment, you now need to spend the rest of your days automating day 2 operations. After all, the tool isn't doing it for you.

For example, you'll have to introduce a tool which manages patches and updates on existing deployments. You also need to figure out a way to keep your template up-to-date, preferably automated. And if somebody breaks his deployment you need to spend time fixing it.

Now, if you've been following my blog recently you probably already guessed the solution to this problem: BOSH :). Here are four reasons why BOSH makes your life as a platform operator easier:

  1. In BOSH a template is called a stemcell, and stemcells are versioned. You don't have to make your own: up-to-date versions of CentOS and Ubuntu stemcells are available online at bosh.io.
  2. When you're using BOSH, software is installed on stemcells by using BOSH releases, which are versioned, available online and actively maintained.
  3. A BOSH deployment defines a desired state. So if a VM disappears, BOSH will just re-create it, re-install the software and attach the persistent disk. Also, when you update the deployment manifest to use a newer stemcell version, BOSH will just swap out the current OS disk for the new one in a few seconds and everything will still work afterwards (see the manifest excerpt after this list).
  4. All these parts can be pushed through a Concourse pipeline! The pipeline will even trigger automatically when a new stemcell version, release version or deployment manifest version is available. I built a very simple pipeline that keeps both the software and the OS of my redis server up-to-date without me ever touching anything.
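To illustrate point 3: the stemcell is just a versioned entry in the deployment manifest. A minimal excerpt (v2 manifest format, version number made up) could look like this; bump the version, run bosh deploy again and BOSH replaces the OS disk:

```yaml
stemcells:
- alias: default
  os: ubuntu-trusty
  version: "3421.11"   # bump this and run `bosh deploy` to roll the OS forward
```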

You can find the source files for this pipeline here. In real life you'd probably want to add a few steps to this pipeline: first deploy to a test environment, then run some automated tests and only then push to production.

In summary: if you're using BOSH, not only do you get all the goodness of versioning and desired state config, it also enables you to employ Continuous Deployment for all your servers and software. You can even test new versions automatically, so you don't have to spend all your time just keeping your platform up-to-date.

What is Concourse CI?

This is the third blog in my "What is" series about products that are part of the Cloud Foundry ecosystem. I discussed Cloud Foundry and BOSH earlier, and now it's time for the next one: what is Concourse CI?

So what is it?

The GitHub tagline for the Concourse project is "Continuous thing doer", which is quite accurate. Some would call it a Continuous Integration tool. It serves the same purpose as the well-known tool Jenkins, but it works quite differently. You can find a comparison between Concourse and other CI tools here, so I won't go into details right now.

What is interesting to know though is that Concourse was born at Pivotal and has been considered the standard CI tool for Cloud Foundry and related projects for a while now. The product was born out of necessity: other tools just couldn't deliver what the CF development teams needed. And what may be even more important: other tools don't follow the design principles that all Pivotal and CF software follows, one of the most important being "no snowflakes".

Snowflake?

As you may know, each snowflake is different from every other snowflake. It's unique. And that's fine when we're talking about real snowflakes. It's not so fine when it concerns servers, especially if you're running hundreds of them. If every server is special you have to run a backup for each one of them regularly, and you need instructions on how to configure the server when it needs to be rebuilt or recovered after a disaster. Troubleshooting becomes difficult because you don't know how it needs to be configured; after all, it's different from all other servers, so you have no reference.

In order to avoid snowflakes CF, BOSH and Concourse use text files to store configuration for Apps, Servers and Pipelines. If a server or app fails you can just blow it away and reload from the config file. Done.

If you are using Jenkins for your CI you probably did a lot of specific configuration on the Jenkins server. If you lost the server you would need to spend a lot of time re-configuring it or restoring it from a backup. It's different for Concourse though: in Concourse everything is stored in YAML files. Concourse server gone? Build a new one from scratch and reload your pipelines from the YAML files. You already know that works fine; after all, that's how the config got there in the first place.

Concourse concepts

Pipelines are first-class citizens in Concourse CI. A CI pipeline is basically all the steps that need to be taken to get application code from the code repository all the way to production servers, or at least to a production release. Steps could be: download the code, build the code, run unit tests, run integration tests, deploy to Cloud Foundry.

Concourse pipelines consist of resources and tasks; jobs are used to compose resources and tasks. The pipeline is described in a YAML file. Tasks can be described in the same YAML but are often described in external files. Since all of this is stored in the same repo as the application code, versioning tasks and pipelines becomes really easy. For an example, take a look at the ci folder in my demo app here. Below is a screenshot of what that pipeline looks like in the Concourse GUI, followed by a sketch of what such a pipeline file can look like.

(Screenshot: the pipeline as rendered in the Concourse GUI.)
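Here is a minimal sketch of a pipeline along those lines. The resource, repository and task file names are made up; the real pipeline lives in the linked repo.

```yaml
resources:
- name: app-source                 # the application code, including its ci/ folder
  type: git
  source:
    uri: https://github.com/your-org/your-app.git   # hypothetical repo
    branch: master

jobs:
- name: test-and-push
  plan:
  - get: app-source
    trigger: true                         # a new commit triggers the job
  - task: unit-tests
    file: app-source/ci/unit-tests.yml    # task described in an external file
  - task: push-to-cf
    file: app-source/ci/push.yml
```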

The online documentation for Concourse CI is excellent so I’ll be lazy and give you the link to the description of the concepts here in case you want to know more :).

Try it yourself

Before you run off and try it yourself, let me tell you how to interact with Concourse. I already showed you the GUI, but know that the GUI is only intended to give a visual representation of your pipelines. It is great to show on a big monitor in your dev team's office.

Creating the pipelines and some other configuration tasks is done through the fly CLI. Which is nice; I hate taking my hands off the keyboard :).
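Working with fly looks roughly like this; the target name, URL and file paths are placeholders:

```bash
fly -t lab login -c https://concourse.example.com      # log in and save the target as "lab"
fly -t lab set-pipeline -p my-app -c ci/pipeline.yml    # create or update the pipeline from YAML
fly -t lab unpause-pipeline -p my-app                   # newly set pipelines start out paused
```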

If you want to try Concourse out for yourself then running the dockerized version is probably the fastest way to get going. If you read my blog post about BOSH and gave that a go yourself, you might want to try to deploy Concourse using BOSH. To help you get started I shared my BOSH manifest below. I couldn't get the HTTPS part working so I left that out for now.

 

 

VMware Photon Platform 1.2 released

Yesterday VMware silently released a new version of its open source cloud native platform. VMware Photon Platform 1.2 is available for download on GitHub now. You can find the details of the new release in the release notes. Below are the highlights.

What’s new?

  • Photon Controller now supports ESXi 6.5 Patch 201701001. Support for ESXi 6.0 is dropped.
  • Photon Platform now comes with Lightwave 1.2, which supports authentication using Windows session credentials, provided you're using the CLI from a Windows box.
  • The platform now supports Kubernetes 1.6 and also supports persistent volumes for Kubernetes
  • NSX-T support is improved
  • Resource tickets have been replaced with quotas which can be increased and decreased. This is a big improvement in my opinion. The previous release wouldn’t let you change resource allocation which was a definite blocker for production use.
  • The API is now versioned, which means the API URL now starts with /v1/.

What’s broken?

  • Lightwave 1.2 is incompatible with earlier versions
  • ESXi 6.0 is no longer supported
  • The API is incompatible with previous API versions. But the good news is that it's now versioned, so this was the last time they broke the API (hopefully).

Update 20-04-2017: some updates taken from the GitHub issues:

  • An HA Lightwave setup is no longer supported; it will be back in 1.2.1.
  • Version 1.1.1 didn't create any flavors at installation, but 1.2 seems to create duplicate flavors.

BOSH on VMware Photon Platform

I explained both BOSH and the Photon Platform in previous posts. I never did a post on how to deploy BOSH on vSphere, but this document does a very good job describing the process. The only thing I want to add to that is: don't use "@" in your passwords! It cost me a day or so to figure out what was going wrong. In this post I will detail how to run BOSH on VMware Photon Platform.

Update 19-04-2017: This post was based on Photon platform 1.1.1. As of today the current version is Photon platform 1.2. The steps in this post may or may not work for version 1.2.

Prepare Photon Platform

  1. Install Photon Platform. This blog post details how to do that.
  2. Make sure you have the photon CLI installed. Instructions here.
  3. I'm going to assume that you don't have anything configured on the Photon Platform yet. If you have, you'll probably already know what to do. I'll also assume this is a lab where you have full access.
  4. Connect the photon CLI to your Photon Platform.
  5. Create a photon tenant and tell the CLI to use it (press enter on any questions to use the default).
  6. Create a network. I'm going to assume you use the default portgroup named "VM Network". If not, please substitute your own network name.
  7. Create a resource ticket for the BOSH environment. I didn't find a way to deploy to other projects than the one you deployed the BOSH director to, so make sure you create a big enough ticket to also fit the workloads you'll be deploying with BOSH.
  8. Create a project that consumes the resources.
  9. Add some flavors. Flavors are types of resources on offer on the Photon Platform, a bit like AWS instance types. A rough sketch of these photon commands follows the list.
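A very rough sketch of steps 4 and 5 is shown below. I'm quoting the photon CLI from memory here, so treat the exact subcommands and flags as assumptions and check photon --help for your version.

```bash
photon target set https://<load-balancer-or-controller-address>   # step 4: connect the CLI
photon tenant create bosh-tenant                                   # step 5: create a tenant...
photon tenant set bosh-tenant                                      # ...and tell the CLI to use it
# the network, resource ticket, project and flavor commands of steps 6-9 follow the same pattern
```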

Deploy BOSH

Install BOSH cli tools

To be able to install BOSH you'll need the bosh-init tool. This tool is like a mini BOSH which is able to deploy BOSH, so it's kinda like BOSH deploys itself. I won't explain how to install bosh-init; the Cloud Foundry docs on this are pretty good. Please follow the instructions here.

To be able to interact with a BOSH director once it's deployed you'll need the BOSH CLI itself. In this case you'll even need it before the BOSH director is running, because it's used to build the Photon CPI release. Again, find the Cloud Foundry docs on how to install the BOSH CLI here.

Prepare the Photon CPI

BOSH is able to work with a lot of different cloud (IaaS) providers and platforms. I already mentioned vSphere, but BOSH is also able to use vCloud, AWS, Google Cloud and OpenStack. The magic that makes this multi-cloud support possible is the Cloud Provider Interface, or CPI.

VMware has published a CPI for Photon. It's not published on the BOSH website yet, but you can find it on GitHub. To be able to use the CPI you'll want to install it into a BOSH director. How? Using a BOSH release of course. The Photon CPI BOSH release is here. Since there is no ready-built Photon CPI release, we'll have to build our own. Don't be scared, it's not that hard (disclaimer: I'm using Ubuntu; commands on a Mac should be similar, not sure about Windows though). Here we go:

  1. Make sure you have the git client installed on your OS
  2. Create a folder to contain the CPI release and your deployment yml. I used ~/my-bosh/photon.
  3. cd into the folder you created
  4. Clone the Photon CPI release git repo, cd into the created folder and create the release (a command sketch follows this list).
  5. There'll now be a dev_releases folder in the bosh-photon-cpi-release folder. Copy the CPI tgz file to ~/my-bosh/photon.
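Roughly, steps 4 and 5 look like the sketch below; the repo URL is the GitHub link mentioned above, and the exact tarball path may differ:

```bash
cd ~/my-bosh/photon
git clone <URL of the bosh-photon-cpi-release repo linked above>
cd bosh-photon-cpi-release
bosh create release --force --with-tarball    # old ruby CLI; with the newer CLI: bosh create-release --force --tarball=...
cp dev_releases/*/*.tgz ~/my-bosh/photon/
```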

Create BOSH manifest

Deployments in BOSH are described in so-called manifests. These are files in YAML format containing a bunch of settings. Each type of deployment has its own manifest, and so does the BOSH deployment itself.

You can find an example manifest for BOSH with the Photon CPI in the Photon CPI release git repo. I'll share my own manifest below so you'll have a feel for what it should look like with all the values populated. If you used the YAML from my blog post to deploy Photon, then you can use my BOSH manifest with just two changes:

  1. Change the network id on line 39. The command to get the id is shown below.
  2. Change the photon project id on line 114. The command to get the id is shown below.
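If I remember the photon CLI correctly, these are list commands along the following lines; check photon --help to be sure:

```bash
photon network list    # shows the network ids
photon project list    # shows the project ids
```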

Save the manifest YAML to ~/my-bosh/photon/bosh-photon.yml.

Run bosh-init deploy

Now you can finally start the deployment. It’s very simple 🙂
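Assuming the manifest is at ~/my-bosh/photon/bosh-photon.yml as described above:

```bash
cd ~/my-bosh/photon
bosh-init deploy bosh-photon.yml
```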

And now we wait 🙂

Use BOSH

Now that we've deployed BOSH we might want to use it for something useful. One of the simplest examples of something useful is deploying a Redis server. Here are the steps involved (a rough command sketch follows the list):

  1. On the Photon platform create another resource ticket and a new project consuming the ticket.
  2. Target the BOSH CLI at the fresh BOSH director and log in (if you're using my YAML the password is 'password').
  3. Run bosh status to confirm you're connected and BOSH is up and running.
  4. Upload the ubuntu trusty stemcell
  5. Upload the redis release
  6. Create a cloud-config YAML for BOSH. Below is my yml.
    1. Replace the project id on line 17
    2. Configure your IP range in lines 37..41
    3. Replace the network id in line 42
  7. Load the cloud config into bosh
  8. Create the redis deployment yaml. Again, below is my version of it.
    1. Replace the director_uuid. Retrieve the uuid by running bosh status
    2. Store the manifest in ~/my-bosh/photon/redis.yml
  9. Tell the bosh cli to use this manifest
  10. Now deploy redis
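For reference, here is a rough sketch of steps 2 through 10 using the old (ruby-based) BOSH CLI commands of that era; the director IP, file paths and URLs are placeholders:

```bash
bosh target https://<director-ip>:25555            # step 2: point the CLI at the director
bosh login                                          # admin / password if you used my manifest
bosh status                                         # step 3: confirm the director is up
bosh upload stemcell <trusty-stemcell.tgz or URL>   # step 4
bosh upload release <redis-release.tgz or URL>      # step 5
bosh update cloud-config cloud-config.yml           # step 7: load the cloud config
bosh deployment ~/my-bosh/photon/redis.yml          # step 9: select the deployment manifest
bosh deploy                                         # step 10
```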

After the deployment is finished you can list the deployments and the VMs that were created by running these commands:
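With the old CLI these are:

```bash
bosh deployments   # list all deployments known to the director
bosh vms           # list the VMs (and their state) per deployment
```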

The output should show the redis deployment and the VM it is running on.

Phew… if you made it this far: congrats! You're on your way to being a cloud native :).

What is BOSH?

In a previous post I went into what Cloud Foundry is and why you'd want to use it. What I didn't go into was some of the magic behind the scenes. For infra-minded people like myself this part might be even more exciting than the platform itself. The thing that makes Cloud Foundry so robust and portable is called BOSH. So what is BOSH?

BOSH is a recursive acronym for “BOSH Outer Shell”. But that doesn’t tell you much about what it does. The bosh.io website explains: “BOSH is an open source tool for release engineering, deployment, lifecycle management, and monitoring of distributed systems.”

What does BOSH do?

It's kinda hard to put BOSH in a certain box like "cloud management platform" or "software deployment tool". BOSH does a lot of things: it deploys virtual machines, but it's not strictly a virtual machine deployment tool. It deploys software, but it's not just a software deployment tool. And last but not least it also monitors, but it's definitely not a monitoring tool.

It’s something better. BOSH deploys versioned software into a running infrastructure. The software needs a VM to run on so BOSH also deploys a VM. Once software is deployed it’s important that it keeps running. So BOSH also monitors the software and automatically heals the application when needed. If you accidentally delete a VM that’s part of a software deployment, BOSH will automatically redeploy the VM, install the software and rejoin the cluster.

BOSH components and concepts

A BOSH installation consists of the following components:

  • BOSH Director: This is what you could call the “BOSH Server”. It is the main part of the software that is responsible for orchestrating deployments and acting on health events.
  • BOSH Agent: This is a piece of software that runs on every VM deployed by BOSH. It is responsible for all the tasks that happen inside the VM.
  • CPI: The Cloud Provider Interface is a component that implements an API which enables BOSH to communicate with different types of infrastructure. There are CPIs for vSphere, vCloud, Google Cloud, AWS and even for RackHD if you want to deploy to physical hardware. The CPI basically translates what BOSH wants to do to the specific cloud platform you're deploying to.

When working with BOSH you’ll use the following constructs:

  • Stemcell: This is a bare bones virtual machine image that includes a BOSH agent. It’s a zip file with some descriptor fields and a machine image. Stemcells are platform specific. So there are stemcells for AWS, vSphere and so on. In the case of a vSphere stemcell you’ll simply find a VMDK packaged in a zip. You can download publicly available stemcells but you can also build your own if you want to.
  • Release: A BOSH release is a bundle of everything that is needed to deploy a specific application, excluding the virtual machine templates. So it includes all runtimes, shared libraries and scripts that are needed to get the application running on a stemcell. There are public releases for a lot of open source software, including Cloud Foundry.
  • Manifest: This is a YAML file that describes how stemcells and releases will be combined into a deployment. It describes the desired state. If you're familiar with vRealize Automation, this is basically a blueprint (a minimal example follows this list).
  • Deployment: A deployment is basically the execution of a manifest. A deployment can contain many machines. When deploying, BOSH uses the manifest to determine the desired state, compares it to the current state and does whatever is necessary to get to the desired state. This is contrary to what vRealize Automation does: when you change a vRA blueprint, that does not change any of the existing deployments. But if you change a BOSH manifest and run deploy again for that manifest, BOSH will implement whatever changes you made to the desired state.
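To make the manifest concept a bit more tangible, here is a heavily trimmed sketch of what a (v2-style) manifest can look like. The release, vm_type and network names refer to things defined elsewhere (release tarballs, cloud config) and are made up here:

```yaml
---
name: redis
releases:
- name: redis
  version: latest
stemcells:
- alias: default
  os: ubuntu-trusty
  version: latest
instance_groups:
- name: redis
  instances: 1
  azs: [z1]
  jobs:
  - name: redis
    release: redis
  vm_type: small
  stemcell: default
  networks:
  - name: default
update:
  canaries: 1
  max_in_flight: 1
  canary_watch_time: 30000-60000
  update_watch_time: 30000-60000
```

Run bosh deploy with a manifest like this and BOSH converges the environment to whatever it describes; change the manifest and deploy again, and BOSH applies the difference.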

Can I try it?

Start out with bosh.io. The documentation is quite good, but the learning curve can be a bit steep. I hope to give you some pointers on how to get it running in another blog post soon.

Getting started with VMware Photon Platform

VMware Photon Platform is an open source cloud platform built by VMware on top of ESXi. It is specifically built to run containerized and cloud native applications. As such it pushes a lot of features into the application layer and out of the infrastructure. For example, it doesn't support VMware HA or DRS, or even vMotion. In this post I'll help you get started with VMware Photon Platform.

Update 19-04-2017: This post was based on Photon platform 1.1.1. As of today the current version is Photon platform 1.2. The only supported ESXi version is now ESXi 6.5, Patch 201701001. The steps in this post may or may not work for version 1.2.

The platform

The Photon platform contains a few different components:

  • Photon installation appliance: Deploy this appliance first and use it to deploy the other Photon components
  • Lightwave: This is similar to VMware SSO
  • Photon Controller: This is basically a vCenter replacement. It has a scale-out architecture and provides the Photon API, multi-tenancy and resource management
  • HA Proxy: Load balances requests to the Photon Controllers
  • Photon OS: A tiny Linux distribution optimized to run Docker containers
  • Photon Agent: This runs on each ESXi host managed by the Photon Controller

Photon supports the following VMware technologies:

  • vSAN: Aggregate your local disks into a large storage pool. Since there is no vCenter server in a Photon deployment, you need an additional appliance to manage vSAN
  • NSX: Photon integrates with VMware's SDN platform. But again: no vCenter, so you'll only be able to use NSX-T, not the well-known NSX-V

Getting Photon Platform up and running

There is a quickstart guide which gives you most of the information you need to deploy Photon Platform. Use the steps below to save some time and fill in some blanks.

Prepare your lab

  1. Download the installer OVA here.
  2. Download ESXi 6.0.0 here (note: 6.5 is not supported at the moment of writing)
  3. Download the patch with build number 4600944 here (yes, Photon only supports this specific build number, sadly…)
  4. Install two ESXi 6.0.0 hosts. I run them as virtual machines in my home lab. DO NOT CONNECT THEM TO A VCENTER!
  5. Both ESXi hosts need a local or shared datastore. If you're following my instructions you'll have to name them "local02". I used 150GB datastores, which is sufficient to deploy the Photon components on one host; I have 23.4GB left on the host running the platform.
  6. SCP the patch to the fresh hosts and use this KB article for instructions on how to deploy the patch.
  7. Make sure you have at least 1 static IP available in the network where you'll be deploying Photon. Obviously that IP should be able to reach the ESXi hosts.

Deploy Photon

  1. Deploy the photon-installer OVA file to one of the ESXi hosts. Just use the good ol' vSphere C# client :). The quickstart guide mentions the web client, but there is no web client on ESXi 6.0… Of course you could use the web client fling, but that would add another step to this process.
  2. Prepare a YAML file. The quickstart guide describes the file you need.
    1. One thing the guide doesn't mention is that you need a complex password of at least 8 characters for the Lightwave administrator. If you don't use one, the installer won't throw an error; the installation of Lightwave will just fail with a very generic error.
    2. Something that is in the quickstart guide but that I missed at first is that all components need to use the Lightwave server as their DNS server. Only the Lightwave server itself uses your own DNS server.
    3. Below is the YAML I used. You'll probably have to replace the IP addresses, and it assumes that the root password for your ESXi hosts is "password". It also assumes that your ESXi hosts have a datastore called "local02". Another thing you might notice: I'm not joining the host where the Photon appliances are deployed to the Photon Controller. Somehow I can't get that to work.
  3. Save the yml above to a file and copy it to the photon installer appliance. The root password for the appliance is “changeme”. I stored the file in /root/photon.yml
  4. Log into the photon installer appliance over SSH (root/changeme)
  5. run: cd /opt/vmware/photon/controller/bin
  6. run: ./photon-setup platform install -config /root/photon.yml
  7. watch the magic happen 🙂
  8. When the magic is finished, connect a browser to the load balancer IP. If you used my YAML, go to: https://192.168.192.76:4343
  9. Log in using the Lightwave administrator credentials. If you used my YAML that would be: administrator@photon.lab / Passw0rd123!
  10. Tadaa!
  11. The GUI is nice, but a lot of features are still missing. If you want to use Photon you'll need the CLI. You can find it on the GitHub releases page, and here are instructions on how to install it.

Using Photon

This post is lengthy enough as it is, so I won't go into details here. One of the features of Photon is that it can deploy a Kubernetes cluster for you. I'm also working on a post explaining how to use BOSH with Photon.

 

NLVMUG UserCon Session: The Why, What and How of Automation

On March 16th the Dutch VMUG UserCon took place. Again it was a big event with around 1000 attendees, and again I had the honor of filling one of the breakout sessions. This year I presented with my co-worker Ruurd Keizer. Our session was titled "The Why, What and How of Automation".

In this session we talked about digitization, the differences between power tools and factories, containers, Cloud Foundry and more.

The recording of our session is now available. It's in Dutch, with no subtitles, but the demos are towards the end, so feel free to skip the first part if you just want to watch the awesomeness 🙂

This presentation also inspired a whitepaper which you can find here.