ITQ becomes Pivotal Partner

For a long time, Pivotal and ITQ have been partners in spirit, sharing very similar views on the future of development, platforms, and (multi-)cloud. Today, we’re excited to announce that Pivotal and ITQ have made this partnership official! ITQ joins Pivotal’s Ready Partner Program as one of the first European partners.

Over 2 years ago, ITQ started on the cloud-native journey, recognizing that real DevOps doesn’t work by just talking about it. What is needed is a platform that makes Dev and Ops work together, combined with modern software development methodologies to iterate quickly and deliver value to the business every day. A match was immediately found in Pivotal, a San Francisco based software and services company that maintains the open source Cloud Foundry platform: a DevOps enablement platform that is natively multi-cloud ready.

Today our cloud-native applications team builds new applications and replatforms legacy ones to leverage the cloud’s full potential, while we design and build Cloud Foundry installations on existing Software-Defined Data Centers so our customers are ready for the multi-cloud future.

Now that Pivotal’s Ready Partner Program has opened to European partners, the unwritten partnership has been made official: ITQ joins the program.

We look forward to working together to convert our joint vision of the future into reality.

Cloud Native @VMworld Barcelona

It’s starting to become something of a tradition: as in the last few years, Barcelona is the venue for the European leg of VMworld. As became clear when we visited VMworld US last month, a new movement in the world of IT is getting a lot of attention: cloud-native applications. During VMworld Barcelona, VMware elaborated on its vision for this movement.

Cloud native

Some years ago, the ability to deliver software was a differentiator. Today, it’s a basic requirement for survival. Just one example: sound systems are bought primarily because they conveniently and wirelessly play music streamed from an online service (e.g. Spotify) anywhere in your home. We more or less trust that the sound quality – the core business – will be good enough, and buy the product for the software-defined extras.
So when everyone can deliver software, the new differentiator becomes speed to market. It’s about getting an idea into production as fast as possible without throwing resilience and stability out of the window. Applications developed this way are termed ‘cloud native’, and the success of companies leading the way (Netflix, Google) speaks for itself.

Applications? Didn’t VMware do infrastructure?

And VMware still does. Where VMware uses the term ‘cloud-native applications’, it’s essentially still about infrastructure. An essential ingredient for going fully cloud native is installing a Platform-as-a-Service (PaaS) on top of the virtualized infrastructure. VMware announcements like vSphere integrated containers and the Photon platform are meant to facilitate this process.

Dev + Ops, and the end of shadow IT

One of the advantages of having a PaaS platform is API-based collaboration: instead of having developers submit tickets to get machines to run their code, the IT department can reserve a set amount of resources and let developers host applications on them by talking to an API.

This way, the traditional wall between Dev and Ops, responsible for resource allocation times on the order of days or even weeks, is broken down. Developers will no longer feel the need to take refuge in third-party PaaS/IaaS platforms like AWS, which creates a ‘shadow IT’ over which nobody has control.

First steps: vSphere integrated containers

vSphere integrated containers is a new technology that makes (Docker) containers visible directly in vSphere, the way virtual machines (VMs) are right now. This enables applications to run as containers inside the existing infrastructure, with all the management and monitoring options we are used to for VMs.

A mature solution: Photon platform

Although vSphere integrated containers are a full-fledged solution for running apps as containers on existing infrastructure, running containers is not the end goal. The real goal of the cloud-native and DevOps movements is to go from idea to production rapidly, and a real PaaS platform is required to get there.

In principle, it’s possible to build a homebrew platform yourself from various open source and proprietary parts, bolted together with custom glue code. A lot of companies are in fact already experimenting with this. However, for real production use it’s more likely companies will not try to build their own car, but instead buy a battle-tested, production-ready, ‘it just works’ platform such as Pivotal Cloud Foundry: an enterprise-ready PaaS platform that includes all the essential ingredients, like dynamic routing, application health management, and zero-downtime platform updates.
The Photon platform enables seamless integration between the virtualized infrastructure (vSphere) and PaaS platforms like Pivotal Cloud Foundry.

What’s in it for me?

As said above, the next few years will be crucial: will you be able to keep up with the wave of disruptive newcomers who see IT as an enabler and who can go from idea to production in days, or will you stick to the old thinking in which IT is mostly a cost center?
VMware is tapping into the cloud-native movement and, for the first time, makes it possible to jump on the bandwagon without having to reinvent the wheel.

VMworld 2015: beyond virtualization

What do you base your selection on when buying some piece of technology? Is it the core functionality, or the added features?

As Kit Colbert aptly stated in his VMworld DevOps program keynote, customers at this point implicitly assume the core functionality of almost any given product will be fine, and base their choices on the extras:

  • when selecting a new home audio set, you choose based on, for instance, connectivity, wireless options, and ease of use; actual audio quality is perhaps the #10 item on the list
  • a lot of companies make decent tractors, but some (e.g. John Deere) set themselves apart and do great business by adding integrated options such as GPS support (driving in straight lines)
  • the hypervisor was once virtualization’s unique selling point, whereas people now buy virtualization suites based on supporting functionality, e.g. High Availability, virtualized layer 2 networking (NSX), and the Distributed Resource Scheduler

Smart existing companies have recognized this commoditization of core functionality. The result is a huge drive from the business to add extra value fast while staying safe and reliable, all in order to stay competitive with the army of disruptive startups coming for (a piece of) the cake.

Developers have been used to the short iteration cycles intrinsic to Agile development for years now: apart from adding value to the business quickly, Agile has the additional benefits of risk reduction and adaptability:

Agile value proposition


However, this mode of operation asks a lot of traditional IT departments, as IT operations has historically focused on infrastructure reliability: a characteristic seemingly best served by never changing anything, which is diametrically opposed to adding new features on a daily basis.

This has given rise to the ‘waterscrumfall’ phenomenon: new features are developed in short iteration cycles (Scrum/Agile) but still have to wait for the biannual release weekend to hit production (waterfall), thereby eliminating most of the advantages gained by adopting Agile methods in development.

It goes without saying that waterscrumfall is not a desirable situation to be in, so people have been experimenting with the logical extension of Agile beyond development to the whole pipeline: the DevOps movement.


DevOps has perhaps over 9000 alternative definitions. The most important thing to note, though, is that DevOps is a mix of culture, process, and supporting technology. You can’t buy DevOps, and you can’t simply implement it.

Adopting DevOps requires a permanent push towards a different mindset, one that enables you to bring changes to production fast, at scale, and in a reliable way. There are, however, some technologies that can help you enforce and enable DevOps principles, and it’s here where the most exciting developments at VMworld 2015 took place.

Overview of the VMware cloud native stack


Unified platform: vSphere integrated containers

Interaction between Operations and Development runs most smoothly if developers don’t have to file tickets for virtual machines, but can instead use an API to request compute resources to run their code. This is where containers come in: originally devised as an operating system (OS) level virtualization technology, their popularity these days stems not from resource overcommitment capabilities but from their ability to serve as shipping vehicles for code, enabling reproducible deployments.

The output of a Continuous Integration (CI) process is known as a build artifact. Whereas usually this is a .war/binary/.zip file, more modern approaches use containers. Ideally, the next stage of the process – Continuous Deployment (CD) – would push the container to a container engine (e.g. Docker), which can schedule it. vSphere integrated containers enable this exact mechanism, which neatly separates Operations and Development concerns:

  • Ops can define special resource pools – Virtual Container Hosts (VCHs) – to keep tabs on the resources available to containerized workloads
  • vSphere exposes a Docker Engine API, which Devs can use to schedule container workloads on a VCH. When a container is scheduled, a virtual machine (VM) is forked (instant-cloned) to run the workload
vSphere integrated containers


Note that since each container runs on a VM in a 1:1 relation, the VM itself is not important here: it just provides isolation and scheduling for the container. From the developer’s perspective, the first-class citizen of the data center is the container itself. At the same time, because of the 1:1 mapping, Ops can monitor and manage the just enough VM (jeVM) in the same way they would legacy workloads.
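In practice, the developer-facing side of this is just the familiar Docker workflow pointed at the VCH endpoint. A minimal sketch, assuming a hypothetical VCH reachable at vch01.example.com with the Docker API on port 2376 (hostname, port, and image are made up for illustration):

```shell
# Point the standard Docker CLI at the (hypothetical) VCH endpoint
# instead of a local Docker engine.
export DOCKER_HOST=tcp://vch01.example.com:2376

# These are the exact commands a developer would run against a plain
# Docker engine; guarded so the sketch is harmless without a Docker CLI.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name web -p 8080:8080 nginx:alpine  # schedules a container; vSphere forks a jeVM for it
  docker ps                                           # Devs see containers; Ops sees the matching VMs in vSphere
fi
```

The point is that nothing changes for the developer: the VCH looks like any other Docker endpoint, while Ops keeps vSphere-level control over the backing VMs.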

Continuous Delivery: vRealize Code Stream

Most development teams have some kind of Continuous Integration set up by now, which generates automated builds on a clean system, tests the build, and stores the build artifact. The next phase – pushing the artifact to test, user acceptance test, and ultimately production – is usually not automated in traditional enterprise environments, as this phase requires Ops cooperation to set up, and, as described above, this is where two worlds traditionally collide.

However, reproducible – and therefore automated – deployment is essential if you want to push new features to production fast as well as safely. Companies today can only survive the onslaught of disruptive newcomers if they set up some sort of Continuous Delivery practice.

This is where vRealize Code Stream comes in: when the Continuous Integration phase of the pipeline outputs a build artifact in the form of a container, vRealize Code Stream pulls it in and takes care of the Continuous Delivery part automatically, based on user-defined rules, checks, and tests.

vRealize Code Stream Continuous Delivery automation

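The rule-based promotion described above can be pictured, product-agnostically, as a chain of gated stages: an artifact only moves to the next environment when that stage’s checks pass. A minimal shell sketch (stage names and checks are made up; this is not Code Stream’s actual configuration format):

```shell
# Generic sketch of gated promotion: run a stage's check, and only
# advance the artifact when the check succeeds.
promote() {
  stage="$1"; check="$2"
  if sh -c "$check" >/dev/null 2>&1; then
    echo "promoted to $stage"
  else
    echo "halted before $stage"
    return 1
  fi
}

promote test "true" &&   # e.g. unit tests passed
promote uat  "true" &&   # e.g. integration suite green
promote prod "true"      # e.g. manual approval recorded
```

A tool like Code Stream essentially runs this loop for you, with the rules, gates, and environments defined in its UI rather than in a script.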

Integration with cloud native platforms: Photon platform

Scheduling a container directly on vSphere using integrated containers is a great start, but it will not be the typical use case for new applications in production environments. Problems such as scaling, scheduling, dynamic routing, and load balancing are universal, so unless you want to reinvent the wheel (a very common developer pastime), it’s much more convenient to deploy applications on a cloud-native application platform. Platforms such as Kubernetes, Mesos, Docker Swarm, and Pivotal Cloud Foundry take care of scheduling, scaling, and dynamic routing automatically.
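From the developer’s seat, ‘the platform takes care of it’ boils down to a couple of commands. A sketch using the Cloud Foundry CLI, where the app name and instance count are made up for illustration:

```shell
# Hypothetical application name for this sketch.
APP=catalog-service

# Guarded so the sketch is harmless when no cf CLI is installed.
if command -v cf >/dev/null 2>&1; then
  cf push "$APP"        # platform stages the app, assigns a route, and health-manages it
  cf scale "$APP" -i 5  # scaling out is one command; routing/load balancing follow automatically
fi
```

Compare this with hand-rolling load balancer configuration and VM provisioning for every new service: the platform absorbs exactly the universal problems listed above.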

Photon Platform architecture


At VMworld, VMware announced the missing link for landing cloud-native platforms on vSphere: Photon platform, a multi-tenant control plane for provisioning next-generation (cloud-native) application platforms.

Integrated containers or Photon platform?

Integrated containers vs. Photon platform


Cloud-native architecture is the future, but applications need to be designed to be cloud native (the twelve factors), and most existing applications are just not ready. So basically it comes down to this:

  • cloud native applications ⇒ cloud native platform using Photon platform
  • ‘legacy’ applications ⇒ vSphere, with packaging as container if possible

Note that for large applications, it doesn’t have to be one or the other: realistic migrations of existing applications will likely keep a core monolithic/legacy part hosted on the traditional platform, with new extensions or refactored bits – for which a business case can be made – as cloud native applications.

Pivotal Cloud Foundry partnership

Pivotal Cloud Foundry (PCF) is just one of the cloud platforms that can be provisioned on vSphere with Photon controller, so why the special attention for PCF? From the VMware perspective this seems obvious: VMware owns Pivotal Software, so of course they like to see it do well – there’s $$$ in it.

However, from the impartial enterprise perspective there is a very good case to make for PCF as well:

  • it’s the only platform that supports enterprise concepts like organisations, projects, spaces, and user management
  • it’s the only platform that strongly drives home the distinction between platform operations (managing and monitoring the cloud platform itself) and application operations (managing and monitoring the apps)
  • it’s a structured/opinionated platform that enforces DevOps principles – as opposed to unstructured platforms (more freedom, a.k.a. more chaos/management hell) such as Kubernetes and Mesos
Pivotal: enabling DevOps


Ergo: it’s the only platform right now that’s good enough for general purpose enterprise production use, and it’s the only platform that ‘just works’ on vSphere.

VMware and the commoditization of virtualization

Technology aside, VMworld 2015 was interesting because VMware is in somewhat of a bind: the hypervisor – once the sole reason for buying VMware – has become a commodity. Nowadays, the reason for choosing vSphere is the management, monitoring, and automation suite around it. Meanwhile, disruptive newcomers are using DevOps and cloud-native architectures. Coming from a development background myself, I can see why these are the future, and I was sure there are enough intelligent people at VMware to recognize this as well.

So VMware had to move, and after following the DevOps and cloud-native tracks and talking to Kit Colbert privately, it became very obvious they are in fact moving.

However, VMware has a strong customer base among Ops teams in organizations not known for their appetite for change; the willingness to change technology may be there, but real change is needed, especially on the culture and process fronts, in order to keep up.

So it’s pretty clear: VMware realizes exactly what needs to happen; the difficulty is in determining the right pace of change. If they go too fast they alienate their current customer base, and if they go too slow they become legacy themselves. A real balancing act, but the proposition is strong.