Combining ADAL JS with role-based security in ASP.NET Web API

In October 2014, Vittorio Bertocci introduced ADAL JavaScript. This library makes it possible for single-page apps to use Azure Active Directory authentication from within the browser. ADAL JS uses the OAuth2 Implicit Grant for this. The introductory blog post and an additional post when v1 was released explain in detail how to configure and use the library.

One interesting ‘limitation’ of implicit grants is that the access token you receive once you’re authenticated is returned in a URL fragment. This limits the amount of information that can be stored inside the token, since URLs have a limited length (which varies per browser and server). As a consequence, the token does not contain any role information; otherwise the URL might become too long. So even when you’re a member of one or more groups in Azure AD, this information will not be exposed through the access token.

So, what to do when you actually want to use role-based authorization in your backend API? Luckily, there is the Azure AD Graph API for that [1]. It allows you to access the users and groups in your Azure AD tenant. The flow is then as follows:

  1. ADAL JS sees the current user is not authenticated and redirects the browser to the configured Azure AD endpoint.
  2. The user authenticates and a token is returned to the browser in a URL fragment. ADAL JS extracts some information from the token for use by client-side script and stores the token itself in session storage or local storage (this is configurable). Note that ADAL JS does not actually validate the token; that is the backend’s job.
  3. On subsequent requests to the backend API (in my case an ASP.NET Web API), the token is sent along in the Authorization header as a bearer token.
  4. In the backend API the token is validated and during the validation process, we use the Graph API to get more information about the user: the groups he or she is a member of.
  5. The groups are added as role claims to the authenticated principal.

In code, it looks like this. I use the aptly named extension method UseWindowsAzureActiveDirectoryBearerAuthentication from the Microsoft.Owin.Security.ActiveDirectory NuGet package to add the necessary authentication middleware to the OWIN pipeline. I left out some of the necessary error handling and logging.

// Apply bearer token authentication middleware to Owin IAppBuilder interface.
private void ConfigureAuth(IAppBuilder app)
{
  // ADAL authentication context for our Azure AD tenant.
  var authenticationContext = new AuthenticationContext(
    $"https://login.windows.net/{tenant}", validateAuthority: true, TokenCache.DefaultShared);

  // Secret key that can be generated in the Azure portal to enable authentication of a
  // specific application (in this case our Web API) against an Azure AD tenant.
  var applicationKey = ...;

  // Root URL for Azure AD Graph API.
  var azureGraphApiUrl = "https://graph.windows.net";
  var graphApiServiceRootUrl = new Uri(new Uri(azureGraphApiUrl), tenantId);

  // Add bearer token authentication middleware.
  app.UseWindowsAzureActiveDirectoryBearerAuthentication(
    new WindowsAzureActiveDirectoryBearerAuthenticationOptions
    {
      // The id of the client application that must be registered in Azure AD.
      TokenValidationParameters = new TokenValidationParameters { ValidAudience = clientId },
      // Our Azure AD tenant (e.g.: contoso.onmicrosoft.com).
      Tenant = tenant,
      Provider = new OAuthBearerAuthenticationProvider
      {
        // This is where the magic happens. In this handler we can perform additional
        // validations against the authenticated principal or modify the principal.
        OnValidateIdentity = async context =>
        {
          try
          {
            // Retrieve user JWT token from request.
            var authorizationHeader = context.Request.Headers["Authorization"];
            var userJwtToken = authorizationHeader.Substring("Bearer ".Length).Trim();

            // Get current user identity from authentication ticket.
            var authenticationTicket = context.Ticket;
            var identity = authenticationTicket.Identity;

            // Credential representing the current user. We need this to request a token
            // that allows our application access to the Azure Graph API.
            var userUpnClaim = identity.FindFirst(ClaimTypes.Upn);
            var userName = userUpnClaim == null
              ? identity.FindFirst(ClaimTypes.Email).Value
              : userUpnClaim.Value;
            var userAssertion = new UserAssertion(
              userJwtToken, "urn:ietf:params:oauth:grant-type:jwt-bearer", userName);

            // Credential representing our client application in Azure AD.
            var clientCredential = new ClientCredential(clientId, applicationKey);

            // Get a token on behalf of the current user that lets Azure AD Graph API access
            // our Azure AD tenant.
            var authenticationResult = await authenticationContext.AcquireTokenAsync(
              azureGraphApiUrl, clientCredential, userAssertion).ConfigureAwait(false);

            // Create Graph API client and give it the acquired token.
            var activeDirectoryClient = new ActiveDirectoryClient(
              graphApiServiceRootUrl, () => Task.FromResult(authenticationResult.AccessToken));

            // Get current user groups.
            var pagedUserGroups =
              await activeDirectoryClient.Me.MemberOf.ExecuteAsync().ConfigureAwait(false);
            do
            {
              // Collect groups and add them as role claims to our current principal.
              var directoryObjects = pagedUserGroups.CurrentPage.ToList();
              foreach (var directoryObject in directoryObjects)
              {
                var group = directoryObject as Group;
                if (group != null)
                {
                  // Add ObjectId of group to current identity as role claim.
                  identity.AddClaim(new Claim(identity.RoleClaimType, group.ObjectId));
                }
              }
              pagedUserGroups = await pagedUserGroups.GetNextPageAsync().ConfigureAwait(false);
            } while (pagedUserGroups != null);
          }
          catch (Exception)
          {
            // Log the exception and rethrow so that authentication of this request fails.
            // (Actual logging is left out for brevity, as mentioned above.)
            throw;
          }
        }
      }
    });
}

Quite a lot of code (and comments) but the flow should be rather easy to follow:

  1. First we extract the token that ADAL JS gave us from the HTTP request.
  2. Using this token and another uniquely identifying characteristic of the user [2], we create a UserAssertion that represents the current user.
  3. With the user assertion and a credential that represents our registered application in Azure AD we ask the ADAL AuthenticationContext for a token that gives our application access to the Azure Graph API on behalf of the current user.
  4. With this token, we use the ActiveDirectoryClient class from the Graph API library to obtain information on the current user. You might wonder how this client knows who the ‘current user’ is. This is determined by the token we provided: remember we asked for a token on-behalf-of the current user. An additional advantage is that we only need minimal access rights for our application: a user should be able to read his own groups.
  5. The groups the user is a member of are added as role claims to the current principal.
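
With the group object IDs available as role claims, the standard Web API authorization attributes work as usual. Here is a minimal sketch; the controller and the group ObjectId below are hypothetical and not part of the original code:

using System.Web.Http;

// Hypothetical controller: the role value is the ObjectId of an Azure AD group,
// because that is what we added as role claims in OnValidateIdentity above.
[Authorize(Roles = "e8a5c1de-0000-0000-0000-000000000000")]
public class AdminController : ApiController
{
  public IHttpActionResult Get()
  {
    // Only reached when the bearer token maps to a user who is a member of the group above.
    return Ok("Hello, group member!");
  }
}
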
Access rights for the Graph API

The Graph API is an external application that we want to use from our own application. We need to configure the permissions our application requires from the Graph API to be able to retrieve the necessary information. Only two delegated permissions are needed:

[Screenshot: delegated permissions]

Notes
  1. The Azure AD Graph API is being replaced by Microsoft Graph. However, Microsoft Graph is still very much in beta, so I chose not to use it (yet).
  2. The UserAssertion class also has a constructor that accepts just a token and no other information that could uniquely identify a user. Using this constructor causes a serious security issue with the TokenCache.DefaultShared that we use: tokens that should be different because we obtained them via a different user assertion are regarded as equal by the cache. This may cause a cached token from one user to be used for another user.

How to differentiate in the highly competitive Service Provider market – Part 3 (Final)

This is the final part of our three-part blog series on our analysis of the service provider market. If you have stuck with us so far, you have read our viewpoints on ‘building a brand’ and ‘controlling costs’ in Part 1 and Part 2. This final part is all about the ‘ability to adapt’:

[Image: the three key focus areas]

Ability to adapt

The IT market is being overrun with new, disruptive technologies. Containers, cloud native apps, the internet of things and machine learning are already here or looming on the horizon. If service providers are unable to adapt and use these new technologies to their advantage, chances are that it will be extremely difficult to be and stay competitive. There couldn’t be a more fitting quote to underline the importance of adaptability than Charles Darwin’s famous words:

It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.

Again, there are three pivotal factors that will determine a service provider’s ability to adapt to change:

  1. Skilled people
    We are in the so-called ‘knowledge worker’ era, and nowhere is this more evident than in the IT industry. At ITQ we strongly believe in knowledge and we invest a great deal in training, development and growth. This is critical for success in the IT industry in general, and even more so in a highly competitive market such as the service provider market. The people will make the difference in the end. While everyone is talking about the commoditization of IT infrastructures, we at ITQ honestly believe this is often oversimplified. Software Defined Data Centers can be extremely complex to design and implement. It’s all about perspective. For the business and cloud consumers, IT infrastructure should really be just as simple as electricity: it should just work and you should be able to ‘just plug in’ or ‘order’. But from the perspective of the engineers and architects who design and build the underlying infrastructures, we see a very complex landscape. Does anyone actually believe that a power plant and the power grid that delivers the electricity are ‘commoditized’, ‘simple’ and ‘require no special skills’ to design, implement or operate? The people and the skills they have will make the difference… believe us!
  2. Tsunami of new technologies
    Technology in the IT industry has developed insanely fast over the last decade, and the tsunami of new technologies is not even close to being finished. For the new ITQ Cloud Native practice we are doing a lot of evangelism for cloud native apps and technologies. This goes even beyond containerization: it is about cloud native infrastructures such as VMware’s Photon Platform, Pivotal’s Cloud Foundry and so on. Things are moving insanely fast. The challenge for service providers is to make the right choices about which new technologies to adopt and to build new services upon. This is a big opportunity to differentiate from other cloud providers and to gain a competitive advantage. But jumping on the bandwagon early can also be risky. Cutting-edge technology that is believed to take over the world today can be a ‘dog’ technology tomorrow. What if Google, for example, decides to pull the plug on a PaaS service that was used by a service provider to deliver a certain add-on service? What happens to all the data stored in that service? SLAs can be very sketchy on data retrieval after a service is terminated. CloudQuadrants (an initiative and collaboration between Weolcan and Arthur’s Legal) did some excellent research on the maturity level of cloud provider SLAs. The management summary can be downloaded here. The results are quite shocking!
  3. DevOps
    The IT industry is slowly realizing that the proven separation of duties across different teams no longer holds up. Developers need to know infrastructure, and infrastructure engineers need to know how to leverage APIs to perform automation and orchestration. A prime example is AWS: everything in AWS is API driven. If Amazon delivers a new service, they often release the API first, before they even release a GUI. VMware’s vCloud Air and the vCloud Suite are also very heavy on APIs. Of course DevOps is more than just programming your infrastructure; it has everything to do with company culture and company mindset. Spotify Labs has released two (1 and 2) awesome videos on how engineering teams at Spotify work together and how their company culture has evolved. Highly inspirational!

Despite the heavy competition and the undeniable challenges that service providers face, these are very exciting times. Market analysts are forecasting insane growth numbers for cloud computing expenditures. Hybrid cloud computing holds a prominent place in Gartner’s Hype Cycle for Emerging Technologies, 2015, and the IT market is almost unanimous in naming hybrid cloud the desired end state. At ITQ we can really tell from the discussions with our enterprise customers that cloud computing, and especially hybrid cloud, is gaining traction, and the topic is on almost every IT agenda for the coming years. If service providers find a way to be unique, really differentiate and be competitive, golden times can lie ahead!

We hope this blog series was informative to you. As we have said before, this blog series is nothing more than a respectful attempt to write down the developments we are seeing in the service provider market. They are our viewpoints, from our perspective, so please feel free to leave your comments or contact us for a deeper discussion on cloud computing related topics.

Thank you for reading!

Requesting Let’s encrypt SSL certificates for NSX – the automated way

Let’s Encrypt is a very promising new initiative aimed at becoming a new standard for how SSL certificates are provided. No more hassle with manually sending in certificate requests, remembering that you forgot to forward your postmaster mail address, getting the certificate out of the mail, manually converting it and finally getting it imported. As per their website: “Let’s Encrypt is a new Certificate Authority: It’s free, automated, and open”. And they don’t disappoint.

Currently letsencrypt is built around an open API and uses online validation through your existing webserver to ensure that you are the owner of the domain. Apache and nginx are supported, and more servers are being added as the product matures. Other validation options exist as well, such as providing your own webroot, using standalone validation or manual validation, and through the plugin system it is possible to write your own plugins for your own product. So, all in all, very promising.

The reason for looking into letsencrypt is that I help manage a non-profit public cloud provider, and a bit over a year ago we made the decision to switch to NSX for our networking needs. We also use the NSX Edge as a load balancer for our website, the vSphere Web Client and some other services such as IRC and a shell host. Recently our certificate expired and – because we all love cool new technologies – we all agreed to consider letsencrypt. However, since letsencrypt doesn’t support NSX out of the box and we didn’t want to play around with manual domain validation, some custom work needed to be done.

Setting up the environment

To start, we’ll need a server that will run the letsencrypt application. In our case we decided to install it on our webserver, but you can also create a dedicated server in case you’re worried about having your certificates and private keys on a server that’s also running web services. It ultimately doesn’t matter where letsencrypt is run, as long as it can serve static files through a webserver.

The installation of letsencrypt (found on http://letsencrypt.readthedocs.org/en/latest/using.html#installation) on Linux is as simple as running the following commands:

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

Next up, we’ll need to set up a webserver that will respond to all domains you want to request certificates for. In our case we use apache with the following vhost config:

<VirtualHost *:80>
 ServerAdmin ops@klauwd.com
 DocumentRoot /var/www/klauwd.com
 ServerName www.klauwd.com
 ServerAlias klauwd.com
 ServerAlias irc.klauwd.com
 ServerAlias vcsa.int.klauwd.com
 ErrorLog ${APACHE_LOG_DIR}/error.log
 CustomLog ${APACHE_LOG_DIR}/access.log combined
 Alias /.well-known/acme-challenge/ "/var/www/letsencrypt/.well-known/acme-challenge/"
</VirtualHost>

The important part is the Alias that maps /.well-known/acme-challenge/ to the location of your letsencrypt validation directory. Of course, you could also publish these files in the same directory as your webserver’s document root, but the alias allows you to use a single location for validation even if you have multiple virtual hosts.
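
Note that, depending on your Apache version and existing configuration, you may also need to explicitly grant access to the aliased directory. A minimal sketch for Apache 2.4 (adjust the path if your validation directory differs):

<Directory "/var/www/letsencrypt/.well-known/acme-challenge/">
 # Allow anonymous access to the ACME challenge files (Apache 2.4 syntax).
 Require all granted
</Directory>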

Next up, we want to configure the NSX load balancer. First off, ensure that HTTP is also served even if you only use HTTPS for your sites, since the online validation takes place over HTTP.

In the NSX load balancer we’ll need to create a pool for the letsencrypt server. If you reuse your existing webserver you might not need to, but let’s assume that you have a dedicated machine.

Creating an NSX pool

We’ve created a “services” backend which is our server that manages all internal services, including letsencrypt.

Next up, for our load balancer we’ll need to create some application rules:

[Screenshot: the two application rules on the NSX load balancer]

As you can see, we have two rules. Now technically only one rule would be required, but for completeness’ sake I wanted to show the first rule as well.

Let’s start off with the last rule first:

acl is_letsencrypt url_sub -i acme-challenge
use_backend services if is_letsencrypt

If you’re familiar with NSX load balancing and/or HAProxy syntax, this might already look familiar to you, but for those who aren’t, let’s go through what this rule does.

acl is_letsencrypt creates an access list which checks (through the url_sub criterion) whether acme-challenge is part of the requested URL, while -i makes the check case-insensitive. So if the URL in our example were www.domain.tld/.well-known/acme-challenge/, the acl would match. Now I know this could likely be a bit stricter, but due to time constraints I did not get around to doing this. Modify this rule to your liking; a slightly stricter variant is sketched after the next paragraph.

use_backend services if is_letsencrypt means that if the acl is_letsencrypt matches the request, the backend pool to be used should be the one named services instead of the preconfigured one. This means that – regardless of the original destination of the traffic – the request will be sent to our letsencrypt machine instead, but only if the URL contains acme-challenge.
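
For the stricter match mentioned above, something along these lines should also work, since NSX application rules follow HAProxy syntax (path_beg matches the beginning of the request path; treat this as an untested sketch rather than the rule we actually ran):

acl is_letsencrypt path_beg -i /.well-known/acme-challenge/
use_backend services if is_letsencrypt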

Now on to the next rule:

acl is_letsencrypt url_sub -i acme-challenge
redirect scheme https code 301 if !is_letsencrypt !{ ssl_fc }

As we obviously love using our free new certificates as much as possible, all traffic coming in on HTTP is redirected to the same URL on HTTPS. However, this is not possible for letsencrypt, which uses HTTP for the online validation.

Again, we see the acl is_letsencrypt described in the previous rule. What we’ve added now is a redirect rule. Breaking down this rule, what it says is to change http to https with an HTTP 301 (Moved Permanently) status code, but only if the connection is not SSL encrypted already (the exclamation mark negates the condition). In addition, we’ve added a condition that it should only redirect to https if is_letsencrypt is not true.

This allows us to redirect all traffic to https, except when the URL contains the letsencrypt challenge path.

Now that this is complete, we can continue on to the actual certificate requests.

Requesting the certificates

For all the details regarding certificate requests, I would like to redirect you to http://letsencrypt.readthedocs.org/en/latest/ for the fine documentation provided by the Let’s Encrypt team. It explains in detail what each option does, how to customize your requests and what other possibilities you have.

For our use case, we want to request a single SAN certificate which contains all our domain names, so after logging in to our services machine, the following command is run from the directory where you installed letsencrypt:

./letsencrypt-auto certonly --webroot --email ops@klauwd.com --agree-tos -w /var/www/letsencrypt/ -d klauwd.com -d vcsa.int.klauwd.com -d www.klauwd.com -d irc.klauwd.com

What this does is request a certificate only, without installing it (certonly), use the webroot plugin (--webroot), set our email address for support and recovery, agree to the terms of service, set the location for the challenge files to /var/www/letsencrypt, and list all the domains we want in the certificate on the command line. While you are still testing, make sure to use the --test-cert option, which will provide you with a limited test certificate, since the number of requests per domain is limited.

Alternatively, you can set these options in a configuration file so you won’t have to provide them on each run, which is what we’ll use in the cron job to automate this process. For more information, see http://letsencrypt.readthedocs.org/en/latest/using.html#configuration-file.
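
As a rough sketch of what such a configuration file could look like (the file location and exact key names are assumptions on my part; the keys generally mirror the long command-line options without the leading dashes, so verify them against the documentation linked above):

# /etc/letsencrypt/cli.ini (assumed default location; check the letsencrypt docs)
email = ops@klauwd.com
agree-tos = True
authenticator = webroot
webroot-path = /var/www/letsencrypt/
domains = klauwd.com, vcsa.int.klauwd.com, www.klauwd.com, irc.klauwd.com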

Now we should have our certificates generated in /etc/letsencrypt/live/<domain>/ where <domain> is the first domain option you entered on the request. In this directory, you’ll find a number of files:

  • cert.pem – This is the certificate in PEM format.
  • chain.pem – This is the issuing CA chain (the Let’s Encrypt intermediate) in PEM format.
  • fullchain.pem – This is the full chain of the CA chain and your certificate, again in PEM format.
  • privateKey.pem – This is the private key for your certificate.

Now before we can use these, there is one last thing we must do. Since NSX doesn’t support the private key format provided by letsencrypt, we first need to convert it to an RSA private key. For this, use the following command:

openssl rsa -in privateKey.pem -check

This prints the RSA-formatted private key to standard output, from where it can be copied for importing into NSX.
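
If you would rather write the converted key to a file than copy it from the terminal, the same command accepts an output path (file names follow the listing above):

openssl rsa -in privateKey.pem -out privateKey.rsa.pem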

Now we’ll open the NSX edge configuration again and go to Settings -> Certificates.

[Screenshot: the Certificates settings of the NSX Edge]

Click the + icon and add a CA certificate. In the following screen, paste the content of your chain.pem file. You should see the Let’s Encrypt Authority X1 certificate being added to your certificate store.

Next, click the + icon again and select “Certificate”.

[Screenshot: adding a certificate]

This time, in the Certificate Contents field, paste the content of your fullchain.pem. Ensure that you include the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines, and don’t leave any whitespace before or after them.

Next, we need the private key. Remember the original key we converted to RSA? Copy the output of that command into the private key field.

Since the key is not password-encrypted, leave those fields blank. If you want to, you can add a description as well.

Now the only thing left to do is to change our load balancer application profile. In the NSX Edge, go to Load Balancer -> Application Profiles and edit the HTTPS application profile you should already have.

[Screenshot: editing the HTTPS application profile]

At the bottom, select “Virtual Server Certificates”. Check “Configure Service Certificate” and select the certificate you generated. Next, select “CA Certificates” and check the Let’s Encrypt Authority X1 certificate.

After saving the settings, your website should now be fully encrypted and have a valid certificate through the Let’s Encrypt public key infrastructure. And all of that without messing around with legacy systems like email validation or having to pay through the nose for multi-domain certificates.

Now, since letsencrypt certificates are valid for 90 days only, our next step will be to run the request commands through vRealize Orchestrator or a cron job, and use the NSX API to automatically configure the newly requested certificate. A rough sketch of such a cron job is shown below.
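
As a rough illustration, a cron entry could simply re-run the request command from above on a schedule. The installation path and the schedule are assumptions, whether the client renews non-interactively like this depends on the client version (so test it by hand first), and pushing the renewed certificate into NSX via its API is a separate step that is not shown here:

# Re-request the certificate at 03:00 on the 1st of every second month (example schedule).
0 3 1 */2 * cd /opt/letsencrypt && ./letsencrypt-auto certonly --webroot --email ops@klauwd.com --agree-tos -w /var/www/letsencrypt/ -d klauwd.com -d vcsa.int.klauwd.com -d www.klauwd.com -d irc.klauwd.com >> /var/log/letsencrypt-renew.log 2>&1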

Happy Encrypting!

How to differentiate in the highly competitive Service Provider market – Part 2

In Part 1 of this blog series we focused on the competitiveness of the Service Provider market, we identified three key focus areas which ITQ thinks are critical for success and we discussed how to build a brand:

[Image: the three key focus areas]

In this part of the blog series on the Service Provider market, we will elaborate on the challenge of controlling costs.

Controlling costs

Cost control is a key area for service providers. In order to be competitive, service providers need to make strategic choices about hardware platforms and architectures, and these have a major impact on capital investments. Choices range from converged infrastructures such as VCE Vblocks and NetApp FlexPods that are pre-integrated and pre-tested, to completely integrated hyperconverged boxes, to fully custom-built, white-labeled solutions. Evidently, investments vary greatly between these choices and, to be honest, we’ve seen successes and failures with all of them. Unfortunately, there is no golden rule for success. Consider the following factors:

  1. Capacity planning
    The ability to do proactive capacity planning can really help in choosing the right moment to invest in your cloud infrastructure. Capacity planning, of course, also relates to being able to forecast your customer growth: at which point do you expand your cloud infrastructure with additional resources? There are a lot of solutions available in the market. As a VMware partner we work a lot with vRealize Operations Manager at our customers. This product has evolved into a very trustworthy solution for cloud operations in general.
  2. Capital intensive
    The commoditization of IT infrastructure does not necessarily mean IT infrastructure is cheap. If you have made the investment before, we don’t have to explain how capital intensive a new premium converged infrastructure rack is. If you are a new service provider with few or no customers, this can be a real challenge. Service providers have to choose the right moment to invest, and the chosen solution should align with the company’s vision, culture, skills and beliefs. If you have a team of highly skilled and very creative engineers with tons of experience in building SDDCs and hybrid clouds, it might not be the right choice to invest in a more rigid converged infrastructure; these sorts of teams accelerate when they can design things themselves. At the other end of the scale, converged infrastructures can be a great solution for teams focusing more on operational excellence. Our advice is to choose carefully and consult the market as much as you need in order to be sure.
  3. Cloud economics (pay-per-use and how to bill)
    How do you bill your customers? With cloud computing there is a strong belief that you should pay for what you use: if a workload is powered down, cost should go down. While this can be a great model, we see that lots of customers are struggling to get a grip on their cloud expenditures. Are you familiar with the AWS Simple Monthly Calculator? It is an extremely powerful tool to forecast your monthly bill, but if this is Amazon’s perception of ‘simple’…? If customers have their cloud governance under control and their applications are built to run on a public cloud, pay-per-use can be great and cost-effective: you scale down during off-peak hours and scale out dynamically based on demand. Great! If customers plan to run enterprise-grade 3-tier applications 24×7, the pay-per-use model might not be best and most definitely will NOT be cheaper. That’s why VMware primarily chose subscription-based cloud offerings in vCloud Air: customers buy a certain, fixed amount of resources for a fixed price, costs are predictable and capacity planning is relatively easy. So how should service providers cope with these varying customer demands? Key in this challenge is to have a flexible and scalable billing engine so you have freedom of choice. Service providers that are targeting new, startup-type companies are probably better off offering a pay-per-use pricing model, but if more traditional enterprise customers are being targeted, a subscription model might be the better offer. As with all things in the cloud market, billing should be easy, automated, directly available for customers when needed and fast/responsive.
    There are a lot of commercial solutions for cloud billing. Some are an integral part of a cloud management platform (e.g. AirVM’s AirSembly) and there are dedicated products that run either on-premises and/or in the cloud as SaaS. Two examples of dedicated cloud billing solutions are the US-based Cloud Cruiser and the Netherlands-based Inter8 CloudBilling.

This concludes the second part of this blog series. We hope you can relate to our viewpoints and that you are curious about the final part of this blog. This final part will be all about the ability to adapt. Stay tuned!

My experience with Veeam and the Veeam VMCE 8 Exam

About two to three years ago a well-respected colleague introduced me to the Veeam product. I was still setting up my lab and was at the same time in search of a fast, small and usable backup product. He convinced me to at least try it. Back then, I believe it was version 7, and it was indeed fast and very usable in my lab.

It was set up very easily, and it worked… just what I needed. Still, I was only making backups back then and never even tried to restore a VM or a portion of one. For a while I forgot about it and never looked at it, until the moment arrived (of course) that an important VM in my lab stopped working. Yeah, I needed to restore it, but how… I had a backup, so I restored it, and it worked. Of course as it should be, but I can still remember how easy, fast and reliable the whole restore was. So my love for the product was there…

Ever since, it has been an essential part of my lab, and not just for the daily backups: I also started to leverage the huge amount of possibilities for testing VMs (SureBackup) and testing new updates and versions before bringing them into production (Sandboxing/Virtual Lab).

A couple of months ago I attended the 3-day Veeam course to learn even more about the product and to get more educated on the advanced capabilities the product offers. Veeam Backup & Replication v8, in a nutshell, delivers backup, recovery and replication for VMware and Hyper-V. This #1 VM backup solution helps organizations meet RPOs and RTOs, save time, eliminate risks and dramatically reduce capital and operational costs. The Veeam suite combines Backup & Replication with advanced monitoring and reporting capabilities to help organizations of all sizes protect virtualization, increase administrator productivity and mitigate daily management risks.

Keeping your business, and of course your own lab, up and running at all times is for some of us critical. Businesses today require 24/7 non-stop access to data, the exploding data growth needs to be efficiently managed, and there is little to no tolerance for downtime and data loss. With its Availability Suite, Veeam has created a new solution category and thus a new market: Availability for the Modern Data Center, as they call it, to enable the Always-On Business. After the 3-day Veeam course I immediately tried as many functions as possible: testing, looking at it, measuring performance, trying restores, setting up the virtual lab, all nice things you can do with the product, and it’s worth it:

– High-Speed Recovery: Rapid recovery of what you want, the way you want it.
– Data Loss Avoidance: Near-continuous data protection and streamlined disaster recovery.
– Verified Protection: Guaranteed recovery of every file, application or virtual server, every time.
– Leveraged Data: Low-risk deployment with a production-like environment.
– Complete Visibility: Proactive monitoring and alerting of issues before operational impact.

Then, of course, it was time to certify…

[Image: Veeam Certified Engineer logo]

The actual exam consists of 50 randomized questions drawn from the course modules. In my opinion, if you are new to the product, the official classroom textbook, hand-out and labs are a must-have; only if you are really familiar with the subject matter can you sit the exam without them. But to be officially certified you must sit the 3-day classroom course, or be a North American citizen, in which case completing some VODL videos could be enough.

The questions are not necessarily balanced equally across the modules (you may get more on deployment than on the product overview, for example), so be sure that you know the product really well and all of its features. Most of the questions are multiple choice and some are just True or False. I had a few questions that required describing what is going on in a picture, and also some difficult questions that were a bit troubleshooting-like.

When you sit the exam, read each question well, most likely 2 or 3 times, to be sure what is being asked. Time is not a constraint (I finished the exam in less than 45 minutes). Questions are quite short and simply worded. If you are not sure right away you can flag the question for review later, and you can go backwards and forwards in the exam. After you end the exam the result is shown immediately and you can start a short survey about the exam. You can check back at Pearson VUE if it all went well.


There are some nice practice exams around on the web, which are in my opinion more difficult than the actual exam. Rasmus Haslund has a very nice practice exam online at: https://www.perfectcloud.org/practice-exams/practice-exam-veeam-certified-engineer-v8/

Currently VEEAM Availability Suite V9 is in the making and will hit the market hopefully very soon. So most likely I will keep you informed…

Arie-Jan Bodde

How to differentiate in the highly competitive Service Provider market – Part 1

As an independent consultancy company focusing on VMware technology, we work closely with numerous service providers in the vCloud Air Network program. From this perspective we see most service providers are facing more or less the same challenges or are at a pivotal turning point on a strategic or tactical level. This three part blog series will provide ITQ’s perspective on the service provider market.

No doubt everybody recognizes the extreme competition service providers are facing right now. On the one hand they have to battle with giants like Amazon, Microsoft, Google and VMware, and on the other hand new challengers, using disruptive new technology, are already looming on the horizon. Aside from these huge challenges, service providers also have to deal with day-to-day operational challenges to stay competitive. The graphic below is an unstructured collection of keywords we have picked up while working closely with numerous service providers over a long period of time. Some examples:

  • We are hired primarily because of the highly skilled people we employ. We can relate to the difficulty of expanding teams with skilled people who are able to cope with all the new technologies storming our way.
  • For cloud consumers, price is extremely important (while cutting costs is one of the biggest reasons for companies to adopt cloud computing, cutting costs should most definitely NOT be a primary business driver, but more on that in a future blog), so to be competitive you continuously have to benchmark your prices. Large public cloud providers have the advantage of ‘economies of scale’, which makes it easier for them to drop prices as they grow and expand their business.
  • Delivering highly scalable cloud services in an instant (speed to market) means you continuously have to make large, upfront capital investments in your cloud infrastructure, while customers want to pay relatively small amounts on a monthly, pay-per-use basis.

[Image: word cloud of service provider keywords]

In their current, unstructured form these keywords are just loose words. Some will be of interest to you, some won’t. Some will relate to others, some won’t. In order to ‘enrich’ these random words so they can have a strategic or tactical purpose, it is important to group them in three key focus areas which ITQ thinks are crucial in order to be competitive in the service provider market:

[Image: the three key focus areas]

Of course this is not an absolute truth or an exact science, but merely a humble opinion and a view we at ITQ share based on working ‘in the field’ with both (potential) cloud consumers and cloud service providers.

Building a brand

For service providers in general, and maybe even more so for service providers in the vCloud Air Network program, it is very difficult to differentiate from the crowd. How do you stand out? Everybody is primarily selling IaaS and maybe some PaaS services. Dutch strategist and author drs. Wouter de Vries jr. wrote a book on service marketing called “Blauwe Bananen, Vierkante Meloenen” (i.e. “Blue Bananas and Square Melons”). He gives the perfect example of how to strategically market yellow bananas. Bananas are basically the same everywhere in the world. Some might be bigger than others, some might taste slightly different, but in the end they are all the same. So how can you be competitive in selling bananas? Competing on price is basically all you can do, and that’s not how you want to compete in a high-end services market such as the service provider market. In the past, some service providers made strategic choices on how to differentiate: offering high-performance computing, guaranteeing extremely high uptimes and SLAs, etc. But to be honest, when we look at the market right now, the majority of the service providers deliver more or less the same services. Of course the big players such as Amazon, Azure, Google and VMware have been quite able to differentiate themselves, but how do smaller, independent service providers prevent themselves from being yellow bananas? With new disruptive technologies changing the market at an insane pace, everything we observe at this moment is subject to change at any time, but right now we see three major areas in which service providers can differentiate and be competitive.

  1. Self-service portal
    A self-service portal is your ‘directly connected interface’ to your customers. It is the first thing end users see when logging on to your service; it is your billboard, your showroom. We believe that a self-service portal should be all about customer intimacy: the customer has to ‘feel’ right at home there and should instantly recognize your brand. In our belief it is imperative to have a branded self-service portal for your customers, and it should work flawlessly, be intuitive to the user, perform great and be highly available. This is an especially delicate subject for service providers in the VMware vCloud Air Network program. Their service is built on vCloud Director, and VMware decided to simply stop the development of the vCloud Director self-service portal a couple of years ago. VMware delivered a set of APIs, and a service provider could purchase or build any portal it saw fit. Unfortunately, not a lot of service providers had the means to build, or the desire to buy, a separate service portal. A lot of service providers are therefore still running the native, not-so-slick vCloud Director portal. This did create a market for companies that build awesome portals and cloud management platforms. AirVM’s AirSembly, for example, offers service providers a very feature-rich, attractive and highly customizable cloud management platform which can be fully branded. During the writing of this article, VMware thankfully announced that it has reconsidered this strategy and that the development of the vCloud Director user interface will be restarted in 2016. VMware also named AirVM (and competitor OnApp) ‘Recommended Cloud Management Platform Partner for VMware vCloud Air Network Cloud Providers’; they will be working closely together with VMware on integration with vCloud Director.
  2. Speed of provisioning
    Customers want services delivered instantaneously and fully automated. They do not want to send in request forms or wait for manual authorization processes. This should all be fully automated and supported by the self-service portal.
  3. Time to market
    Time to market closely relates to speed of provisioning, but has more to do with being able to integrate and deliver the new, 3rd-party solutions that customers are demanding. Open APIs can be utilized to integrate these solutions and play a major role in achieving a quick time to market for new solutions that enrich your service proposition. New technologies, like VMware NSX, can be used to deliver network and security services with the click of a button: deep packet virus inspection can be an add-on service and simply a checkmark when a customer orders a virtual machine. On the other hand, Amazon has released hundreds of new cloud services over the last couple of years. It is insanely difficult for independent service providers to compete with that. Economies of scale…

The next blog post in this series will focus on the second area, ‘controlling costs’.

Cross-Cloud vMotion

Over the years, many technological developments have revolutionized the market. One of these developments was vMotion: being able to perform a fully automated and live migration of a virtual server to another physical server without any downtime. Nowadays, this is considered a common technology in all data centers, but at the time this was ground-breaking and revolutionary. vMotion truly shook the market, gaining VMware the leading position in the server virtualization market. I think everyone who lived in the world of physical, dedicated servers, clearly remembers when they saw their first vMotion and how very cool this was.

Over the years, VMware has made many impressive improvements and added functionality to vMotion, but not until VMworld 2015 did I regain that “WOW!!!” feeling after seeing a vMotion. This time it was a vMotion of a virtual server in a local vSphere data center to a public cloud instance in vCloud Air: Cross-Cloud vMotion!

As part of the technology preview of ‘Project SkyScraper’ (during the keynote at VMworld 2015), VMware showcased a vMotion of a virtual server to vCloud Air without any downtime. This revolutionary functionality will be made available to vCloud Air customers in a future release of VMware vCloud Air Hybrid Cloud Manager. How simple can migrating to the cloud be?

Of course, if your application is already suitable to be spread across multiple physical locations and is able to run fully stateless, moving a full virtual server to and from vCloud Air will not be interesting. The reality is that few business applications are ‘cloud native’. Many data centers are still full of “traditional” three-tier applications with a presentation, application/middleware and data layer. These applications are a long way from being able to migrate between corporate data centers and public clouds.

That’s why I expect that this technology will provide a new revolution in the market and will gain VMware the position of most appropriate public cloud IaaS provider for VMware customers. No, there is no typo in the last sentence: I really mean most appropriate cloud provider for VMware customers! vCloud Air is primarily intended for customers who are already using VMware technology in their corporate data centers; VMware doesn’t focus on customers that have fully focused on Hyper-V, XenServer and/or KVM. Of course vCloud Air can function as a pool of IaaS resources, thus complementing a Hyper-V data center, but the unique power of vCloud Air – providing a ‘seamless’ extension of the local data center – will only be fully utilized when the local data center also uses VMware technology.

I can’t wait until Cross-Cloud vMotion will be made available to the general public!

VMware vRealize Automation and the SDDC

At VMworld Barcelona, VMware announced vRealize Automation version 7. This version will be a big step forward from the current 6.x version. The first improvement I would like to mention is the majorly simplified installation procedure. Before we dive into the details of the new features, first a short overview of vRA and why one actually needs such a management tool.

Software Defined

The foundation of every new IT environment is made up of at least three Software Defined components: Software Defined compute (a.k.a. virtualization), Software Defined Storage and Software Defined Networking. Software Defined simply means the functionality has been fully implemented in software and can be deployed on general purpose hardware. This way, Software Defined Networking is eliminating the need for physical routers and Software Defined Storage is making separate storage solutions redundant.

But Software Defined also means we can define the functioning of these components via software. No human interaction is required with these systems for their configuration. Instead, there is a central component through which administrators can define who gets to use how much of what. Then, a datacenter end user can request and receive services from the SDDC by means of a service catalog.

VMware vRealize Automation is such a central component; it merges the three software defined components of a data center into a Software Defined Data Center. vRA offers data center users a clear, organized service catalog and gives administrators the ability to define policies and allocate permissions to users. This way, end users gain autonomy while the IT department stays in control, making sure security and compliance requirements and standards are met.

VMware vRealize Automation 7

As mentioned earlier, VMware announced a new version of vRA during the keynote of VMworld Europe 2015. This release brings a more mature integration with other VMware products, like the Software Defined Networking product NSX. The new blueprint designer allows us to compose services in a Visio-like manner, without differentiating between types of blueprints. A service might simply be a single Linux machine, but also a complete multi-tier application including networks, firewall rules and load balancers. This service may then be offered as a whole in the service catalog. The moment an end user requests the service, all components are created in the SDDC in real time.

vRA 7 also offers a lot more possibilities for so-called extensibility. This means it is now easier to integrate vRA with external systems, for example to let approval of requests go through ServiceNow or to request IP addresses using an IPAM system. This extensibility is separate from the blueprints and is managed by the SDDC manager, not the blueprints designer.

Lastly, the functionality formerly known as AppServices is now fully integrated into vRA 7. This means vRA will be able to install software onto the machines it provisions. This functionality is integrated with the blueprint designer; supplying a server with Tomcat, for example, will be as easy as dragging the Tomcat service onto the machine in the blueprint designer.

End User Computing @ VMworld Barcelona

This week is arguably the best week of the year technology-wise, for yesterday VMworld Barcelona kicked off.
VMware's End User Computing portfolio is currently undergoing a true revolution, and End User Computing is present at the Fira Gran Via in a big way. A number of new technologies are being showcased at VMworld this year.

NSX meets Horizon

More and more companies opt to bring the security of their IT infrastructure to a higher level. However, IT infrastructure security goes way beyond the network. Trends like Bring Your Own Device and Unified Workspaces force us to provide an IT environment that lives up to today's strict security requirements. VMware has cleverly played into the market demand for security at the end-user level. NSX, VMware's network virtualization technology, lets us segment individual virtual desktops to allow only necessary traffic, in a scalable way. This is what we call micro-segmentation. In Virtual Desktop Infrastructure (VDI), scalability is essential; that's why a technology like NSX needs to be scalable in order to be deployed in VDI.

Project Enzo

Earlier this year, Project Enzo was revealed: a new method for managing desktops. Project Enzo is a combination of a number of products that will simplify the deployment of VDI and application delivery. First off, there is a new user interface, which is similar to the interfaces of both AirWatch and EVO:RAIL. This intuitive interface is HTML5 based, which makes it fast and easy to use in different browsers. Project Enzo makes use of several types of Smartnodes. These are essentially sources from which a desktop can be deployed, for instance EVO:RAIL, Horizon Air, Horizon DaaS and the traditional Horizon View environments.
Additionally, Project Enzo can use the newly introduced vSphere 6 technology InstantClone, which allows us to make a new active clone of a running VM that runs exclusively in memory and can be created in a matter of seconds. Add to this the automated provisioning of AppVolumes AppStacks and the result is an extremely scalable, dynamic VDI environment.

Application Delivery

When viewing the application landscape of (especially) enterprise companies, we see a shift in the way applications are delivered to end users. Traditionally, we are used to MSI installation files delivered with SCCM. Products like Identity Manager and ThinApp and the acquisitions of AirWatch, Cloud Volumes and Immidio have positioned VMware as a market leader in the field of Application Delivery. This year at VMworld, there are a number of sessions dedicated to selecting the best application delivery technology. Let's take Office as an example. Why would we install it in a traditional way on a Windows-operated device? And, when introducing Bring Your Own Device, would we then only allow the use of Windows laptops? Of course not.
Products like Identity Manager and AirWatch enable us to automate the deployment of a primarily native application for each mainstream device, for instance Microsoft Office for Windows and iOS devices, but also applications like Slack and Skype.

Project A2

At VMworld San Francisco, VMware announced the new product Project A2, a collaboration between AirWatch and AppVolumes (hence the A2). AirWatch is the Enterprise Mobility Management (EMM) system and AppVolumes the Rapid Application Delivery system by VMware. By combining these products and making use of Windows 10's native EMM features, deployment of and migration to Windows 10 can be simplified significantly. This week, several VMware End User Computing specialists are present at VMworld Barcelona to give Project A2 product demonstrations.

Boxer

During today's keynote, Sanjay Poonen (general manager of End User Computing, VMware) announced the addition of Boxer to VMware's EUC portfolio. Boxer is a mobile inbox application which simplifies working with email on devices like an iPhone or Android tablet. Boxer enables the use of the most popular email services, like Exchange, Google Mail, Yahoo and iCloud, from one simple interface.
The acquisition of Boxer suggests the AirWatch Inbox client will be gradually phased out, while its key features will be added to the Boxer application. The closing date of the acquisition is not yet known, but Boxer's website already claims membership of the VMware portfolio.

vCloud Air key announcements at VMworld 2015

vCloud Air was introduced by VMware in 2013 – initially under the name vCloud Hybrid Service (vCHS) – as a public cloud IaaS (Infrastructure as a Service) offering. After two years of development and improvement, VMware is, in my opinion, successful in positioning their public cloud distinctively in the market. vCloud Air focuses primarily on customers who are already using VMware technology in their corporate data centers. In addition, VMware made the underlying infrastructure of vCloud Air (vSphere and vCloud Suite) highly available. AWS, for example, has the vision that you should solve high availability at the application layer, whereas VMware focuses on making the underlying infrastructure highly available. Reality today shows that only a few company applications are able to properly deal with a failing infrastructure. If you are using VMware technology in your corporate data centers and you want to consolidate (parts of) your virtual infrastructure into a public cloud, then vCloud Air should be the first service to be considered.

Contrary to what was published in a recent (unconfirmed) rumor on a technology news website, VMware is investing hugely in vCloud Air. At VMworld 2015, many new features and changes were announced for this service, all contributing to the realization of VMware's Hybrid Cloud vision. Below you'll find, in more or less detail, the key vCloud Air announcements at VMworld 2015:

VMware vCloud Air Disaster Recovery (DR) Services
Previously, the DR service of vCloud Air was only available on a 'subscription' basis. This meant you purchased a fixed amount of 'standby' resources at a fixed price. The pricing model for DR is now replaced by a 'pay-for-what-you-consume' model: a flat-rate fee per VM is charged for the replication of the VM, and additionally you only pay for the amount of compute resources you consume after a DR. This new pricing model makes starting with vCloud Air DR easier and cheaper for customers.

Further DR developments have led VMware to announce a new SaaS offering that allows for automating and orchestrating DR plans in a simple way: VMware Site Recovery Manager Air. An important feature of SRM Air is support for failback. After a DR it is crucial to migrate back to your own data center in a simple and fast way (once it is available again). VMware has learned important lessons in this area: in the first release of vCloud Air DR (in combination with vSphere Replication) no native failback was possible. To migrate back to your own data center, a failed-over VM had to be turned off and you had to perform an offline migration using vCloud Connector. This made the service virtually useless for customers. Luckily VMware has realized this and the service now supports a full failback.

VMware vCloud Air Object Storage
For storing multiple terabytes of unstructured data, vCloud Air customers can now use vCloud Air Object Storage. VMware offers the customer a choice between a transparent integration with the Google Cloud Platform public cloud service or landing on a private cloud based on EMC’s software defined storage platform, EMC ViPR.

The Google Cloud Platform offers customers a choice of three flavors of Object Storage, depending on performance and availability requirements:

  • Standard storage for best performance
  • Durable Reduced Availability Storage for a lesser availability
  • Nearline Storage for low-cost archiving

EMC ViPR-based Object Storage offers another two options:

  • Standard Storage for a single-region solution with high durability of 11 nines.
  • Premium Storage for a geo-replicated solution across multiple regions with extreme durability of 13 nines.

The Google Cloud Platform option is available today. The EMC ViPR variant is currently in beta. More details will undoubtedly follow in the near future!

VMware vCloud Air SQL
VMware now offers a Database as a Service (DBaaS) offering with the introduction of vCloud Air SQL. With this service, customers can quickly and efficiently use scalable SQL databases within vCloud Air. Currently this service is available in a so-called Early Access Program.

VMware vCloud Air Advanced Networking Services
Advanced Networking Services were announced a while ago by VMware as part of the 'One cloud, any app, any device' campaign in February 2015. This important expansion of vCloud Air has finally been made available to the general public. Advanced Networking Services bring some key features of VMware's network virtualization product NSX to vCloud Air. The most striking features are the ability to implement a 'zero-trust' security model through micro-segmentation and the support of dynamic routing using industry-standard protocols such as OSPF and BGP.

Using micro-segmentation, firewall policies can be applied on the virtual NIC(s) of virtual servers. For example, VMs in a secure DMZ network can be in the same network segment without having layer-two access to each other. In many traditionally built data centers, networks are divided into three layers: a presentation, application and data layer. Firewalls are often active between these layers to regulate traffic. Micro-segmentation can prevent various systems within the same network layer from accessing each other.

In addition, Advanced Networking Services offer improvements to load balancing, SSL and VPN, and can easily be scaled up to 200 virtual networks.

A final important fact: Advanced Networking Services are currently only available for Dedicated Cloud customers. These services are not available in the Virtual Private Cloud variants of vCloud Air.

VMware vCloud Air Hybrid Cloud Manager
Hybrid Cloud Manager is a new management product for vCloud Air. It provides a simple solution for managing vCloud Air resources from the vSphere Web Client and offers customers a unified management solution for both on-premises and cloud resources.

Hybrid Cloud Manager also features a number of advanced network technologies that allow you to extend networks to vCloud Air using a Layer 2 VPN (Data Center Extension), thus providing a seamless extension of the local vSphere data center. The choices are Hybrid Networking Standard and Premium; the main differences are the number of vCenter connections (1 vs. 3) and the available bandwidth (100 Mbps vs. 1 Gbps).

Hybrid Cloud Manager also offers advanced cloud migration technologies. WAN acceleration enables efficient replication of workloads to and from vCloud Air. During replication a virtual server can be active; a shutdown is only needed when a ‘switchover’ occurs. This is a huge improvement over the completely offline migration offered by vCloud Connector.

Technology Preview – Project SkyScraper
'Project SkyScraper' technology will, in the near future, make it possible to migrate virtual servers to and from vCloud Air without downtime: a Cross-Cloud vMotion! More on this in a next blog!

Finally, I would like to mention the 'Project SkyScraper' feature called Content Sync. This feature allows customers to subscribe vCloud Air to a vSphere Content Library, enabling you to seamlessly sync things like templates, vApps and ISOs between the local data center and vCloud Air.

I can only conclude that, with all these new features and enhancements, VMware has full focus on vCloud Air. According to VMware’s vision the Hybrid Cloud is the ideal end state and VMware vCloud Air plays a huge role in realizing that vision. I certainly can’t wait for these nice improvements so I can get started with them at our clients!

Google Cloud Services

As you may know, VMware and Google have an extensive partnership. Lots of Google Cloud Platform services are being leveraged through vCloud Air. At VMworld 2015 Europe a couple of new services were announced as 'generally available': Google Global DNS, Google Cloud Datastore and Monitoring Insights on VMware vCloud Air.

vCloud Director 8.0

Whilst not really being a vCloud Air announcement, I feel I should also mention the release of vCloud Director 8.0. vCD is VMware’s cloud platform offering for service providers. vCloud Air is built on top of it and vCloud Air Network Partners are also leveraging it to deliver cloud services.