ITQ Roundtable SDDC – The journey called Automation

On Thursday 17 November 2016 the Roundtable ‘The journey called Automation’ will be held at ITQ’s office in Wijk aan Zee. Join us for a lively networking and educational event and gain new insights into ‘Software-Defined Data Center’ and ‘Automation’. Interested in how these could have an immediate impact on your organization?

‘Software-Defined Data Center’ and ‘Automation’ are not new terms in IT. Are you, however, familiar with how they can become a service that virtualizes, delivers and benefits your entire organization? Has your journey already started, or are you taking the first steps? What kind of challenges do you face now, and which will you face along the way?

During this Roundtable our most experienced consultants will inform you about the latest developments regarding the Software-Defined Data Center in relation to automation. We will share our journey with you and help you plan the road ahead. We would be pleased if you could join us and hopefully take an active part in our discussions.

Since space is limited, please confirm your attendance and register at events@itq.nl.


ITQ – IT Transformation Services

“From time to time you have to take a step back and look at the broader picture.”

We at ITQ place great value on exploring new technology. We strive to think of ways to innovate and, consequently, to use the most advanced technology to ultimately help our customers. These innovations are often driven from a ‘technology push’ perspective and are therefore usually set up, governed and managed in the operational layer of an organization. Another approach, we think, is to step back from time to time and look at the broader picture. This will result in a more successful implementation.

The Amsterdam Information Management (AIM) or 9-cells model, developed by Rik Maes, illustrates this perfectly. The model depicts three layers of an organization: strategic, tactical/architectural and operational. Its three pillars are business, information (management) and technology. The model provides structure for translating business strategy – via either architecture or information management – to the operational layers of an organization.

In the context of this article, the rationale behind this model is that projects initiated solely at the operational level, without a clear sense of why from a strategic perspective and without the structure and boundaries provided by architecture, are extremely difficult to deliver successfully. The objectives often become fuzzy and the project becomes detached from the organization.

Sometimes it is a good idea to take a step back and look at the broader picture: why are we doing the (operational) things we do? How do the things we are doing from a technology perspective ultimately support our overall business goals and strategy? How can we use technology to actually drive our business? What business architecture (or organizational structure) best fits my strategy? ITQ can help organizations answer these questions, and more, with a series of IT Transformation Services.

IT Strategy and vision development
In most organizations the business strategy, vision and business goals are crystal clear. However, it may be a challenge to develop a fitting IT strategy and supporting vision, especially if IT is not a core competency.

Where the industry used to talk about “business and IT alignment”, organizations increasingly expect IT to actually drive the business instead of merely supporting it through alignment. That is a huge objective! ITQ can help you develop an IT strategy and vision to accomplish that goal. ITQ has extensive experience in developing:

  • (Software-Defined) Data Center strategies
  • (Hybrid) Cloud strategies
  • Digital Workspace / End User Computing strategies

ITQ can also assess the readiness of your organization before starting new IT initiatives. By conducting a readiness assessment, ITQ validates which people and process changes need to be in place before any new IT initiative can truly succeed.

ITQ determines the maturity level of the organization (or specifically of the IT organization) and assesses what changes have to be made from a strategic, architectural and operational perspective. ITQ provides readiness assessments across the board, such as:

  • Business Continuity and Disaster Recovery readiness assessment
  • Cloud computing readiness assessment
  • Digital Workspace readiness assessment
  • Automation readiness assessment

Do you have a clear IT strategy and vision?
Do you need help creating or updating your IT strategy? Please contact us. We look forward to telling you about our approach and how we can support you in accomplishing your goals.

Review of VMworld US 2016


Johan van Amersfoort, ITQ consultant and team lead End User Computing, attended VMworld US. Together with more than 20,000 other IT professionals, Johan attended several of the hundreds of sessions. In this article he summarizes his favorite sessions.

Ask the Experts: Practical Tips and Tricks to Help You Succeed in EUC [EUC9992]
In this session a number of EUC Champions answered questions regarding EUC. Hundreds of attendees could ask anything they wanted to know. Johan, as an EUC Champion, was excited to be part of this session. Please come and ask anything you want to know about EUC at VMworld Europe in Barcelona or at the VMware Benelux Party in October.

Architecting VSAN for Horizon the VCDX Way [EUC8648R]
A technical session; the title says it all. If you are thinking of designing and deploying VSAN for Horizon, this is a must-see, especially for Johan as he submitted his design for VCDX-DTM. This session will also be available in Barcelona in October.

Solutions Exchange
One of the attractions at each VMworld is the Solutions Exchange: a giant hall containing booths from a wide variety of vendors operating in the VMware ecosystem, from storage and network vendors to software solutions. The Solutions Exchange is the place to be to meet and talk with all the vendor specialists.

Meeting friends and making new friends
This is probably the most fun of all. In the Netherlands we have a strong VMware community, with many people playing an active role through blogging and presenting, and meeting each other at regular vBeers gatherings in places such as Amsterdam and The Hague. Meeting each other on the other side of the Atlantic Ocean, in a city like Las Vegas, is almost a guarantee of fun. And so it was!

I hope to see you (again) at VMworld Europe 2016 in Barcelona.

VEEAM: All backup proxies are offline or outdated.

While helping a customer install and extend their repositories to accommodate their GFS backups, I came across a weird error. What I did was this: I asked the storage department for some additional storage LUNs to extend my physical proxies, which also act as repositories, with an extra repository to house the GFS backups.

Then, all of a sudden, all the backups failed with a rather peculiar status.

Unable to allocate processing resources. Error: All backup proxies are offline or outdated.


The proxies showed up as “unavailable” under Managed Servers in the Backup Infrastructure pane. A rescan solved the problem immediately at first, but as soon as I tried to restart a backup job it failed again with the same error and the proxies came back as “unavailable” once more. Digging through the log files I stumbled upon the same error, but could not quite figure out what was causing it. Everything else was working great, except the backups. Hmm, this sounds like a severity 1…

With some help from Christos from Veeam Support we retraced the steps, starting with what happened when rescanning the proxies/repositories while they were “unavailable”.

We followed the log sequence down to the host/repository rescan that I manually executed in order to refresh the proxies, and we found the following:

[25.05.2016 12:36:30] <07> Error    Failed to connect to transport service on host ‘hostname.proxie.FQDN’ (System.Exception)
[25.05.2016 12:36:30] <07> Error    Failed to connect to Veeam Data Mover Service on host ‘hostname.proxie.FQDN’, port ‘6162’
[25.05.2016 12:36:30] <07> Error    Access is denied

The above message doesn’t necessarily mean that something is broken; a plausible reason could be that a connection over port 6162 to the repositories/proxies was not possible due to local firewall or antivirus restrictions. After a reboot this restriction might be lifted, the connection to the repositories/proxies can succeed again and the jobs will run normally. If there is no firewall or antivirus installed on the Veeam server, it could also be some local network issue that gets resolved by a reboot.

What actually caused it was the following:

By default, when I added the drive to the Windows 2012 R2 proxy (and repository) server, the new drive for the new repository got letter D:\, while the already existing default repository points to E:\. So I added the repository, and then I thought: oops, the new repository should point to F:\ instead. What I did next is probably what caused the incident…

I went back to the proxy server, deleted the drive and created it again as drive F:\ (I did this on both proxies). I then went back to the backup server, removed the two newly created repositories and created them again, this time pointing to the F:\ drive. My theory is that by changing the drive letter, the drive letter also changed for the existing repositories, so the backups were pointing to a drive/location that is valid but cannot be found… A day later I repeated all the steps, did exactly the same, but added the correct drive right away, and everything went 100% correctly.

Well, there is a much easier way to point the backups to a different repository than changing disk volume names and the like. If you face this situation in the future, it is enough to go to Backups > Disk and click “Remove from backups” for the backup set that has been “moved” to a new drive letter. After that, the backups will no longer exist in your database (make sure not to select “Remove from disk”, though). Then, by rescanning the repository, Veeam will import the existing backups into the database from their new location, making them available for use again. Once they are imported, they can be mapped to their original job by opening the backup job settings > Storage > Map backup. There is also a KB article that explains this use case in more detail: https://www.veeam.com/kb1729

With thanks to Christos from Veeam Support for helping and pointing me in the right direction.

Arie-Jan Bodde

 

NSX licensing changes

Disclaimer: This post contained NDA information which was not allowed to be published and has been removed. 

Today VMware made a change many of us have been asking for, and one which is warmly welcomed: a tiered licensing model for NSX has been announced. One of the biggest issues with NSX for customers and consultants alike was its pricing: even if you just wanted to use basic features of NSX such as VXLAN, distributed routing or the Edge Services Gateway functionality, you had to pay for the full product. Effective from today, three license tiers are available: Standard, Advanced and Enterprise.

 

Multiple options exist for different use cases: Enterprise is the full-featured NSX product, while Advanced is aimed at most enterprises that do not require features such as Cross-vCenter NSX or hardware VTEP integration. Standard is suited for small businesses or enterprise customers that use NSX specifically for distributed switching and routing and are not interested in distributed firewalling, third-party integration, load balancing or vRealize Automation-based policy-driven security.

In addition to the tiered licensing model, some other changes have been announced.

NSX for Horizon has been rebranded to NSX Advanced for Desktop. This means that if you buy the per-user license of NSX for Horizon View you will always get the Advanced license of NSX, which should be sufficient for most Horizon use cases that require NSX. The only thing that would be a nice improvement is to include the Cross-vCenter NSX feature in the NSX Advanced for Desktop license, considering how much effort VMware is putting into the Horizon View pod design.

The Enterprise license is now also available in a per-VM (non-perpetual) license model in addition to a per-socket license, which allows you to budget for growth much more easily if your company has a significant delta in the number of virtual machines. Which license is best for you is entirely dependent on your requirements, your virtual environment and the features you wish to implement. Of course, you can always change your license to a higher tier if at any point you decide to implement additional features in the future.

One last change which is very welcome concerns the license requirements for management clusters and DR sites. As described by VMware, you are no longer required to license physical hosts which have not been prepared for NSX if your NSX Manager or controller is running on those hosts. In addition, you are no longer required to license any hosts that do not have active workloads running on them, allowing you to run a failover site with products such as Site Recovery Manager without having to pay double the NSX licensing fee (assuming that you are running an active-failover DR setup).

 

NSX licensing changes are effective from the 3rd of May 2016 onwards. For more information, see the official KB at https://kb.vmware.com/kb/2145269.

If you would like to discuss the possibilities of NSX, I am reachable by phone (+31 6 29007866) or email.

SignalR hub authentication with ADAL JS (part 2)

In part 1 of this post I described how to solve the first part of the problem: making sure the JWT token we got from ADAL JS gets sent to the server (i.e. the SignalR hub). Part 2 describes how the server extracts the token, validates it and creates a principal out of it. In another post, I already described how to configure an Owin middleware pipeline that does exactly this: via UseWindowsAzureActiveDirectoryBearerAuthentication (and if you Google this extension method you’ll find a lot more information).

So ideally I would tap into the same Owin middleware pipeline that regular ASP.NET requests pass through. Unfortunately, that’s impossible: SignalR uses different abstractions for similar concepts like ‘request’ and ‘caller context’. So there’s some plumbing involved, especially where token validation is concerned. I copied some classes from the Katana Project library for that, in particular from the Microsoft.Owin.Security.ActiveDirectory package.
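For reference, this is roughly what that regular Web API pipeline looks like (a minimal sketch under the assumptions of this post; the tenant and client id are the same placeholder values used further down):

using System.IdentityModel.Tokens;
using Microsoft.Owin.Security.ActiveDirectory;
using Owin;

public partial class Startup
{
  // Minimal sketch of the regular Owin pipeline for ASP.NET Web API requests.
  // The tenant and audience are placeholders; substitute your own Azure AD values.
  public void ConfigureAuth(IAppBuilder app)
  {
    app.UseWindowsAzureActiveDirectoryBearerAuthentication(
        new WindowsAzureActiveDirectoryBearerAuthenticationOptions
        {
          Tenant = "yourtenant.onmicrosoft.com",
          TokenValidationParameters = new TokenValidationParameters
          {
            ValidAudience = "12345678-ABCD-EFAB-1234-ABCDEF123456"
          }
        });
  }
}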

The SignalR protocol can be roughly divided into two stages: connection setup and realtime communication over this connection (there’s of course a lot more detail to it). You’d want to authenticate the client and validate its token on the connect, not on every subsequent call. It doesn’t make sense to authenticate each realtime call since these aren’t possible anyway without first connecting.

To implement authentication for SignalR hubs, the AuthorizeAttribute is provided. It implements two interfaces: IAuthorizeHubConnection and IAuthorizeHubMethodInvocation, essentially implementing both SignalR protocol stages: connect and communicate.

So what does this look like? And what can we borrow from Katana to simplify and improve things? First the outline of our JwtTokenAuthorizeAttribute (by the way: I attached a zip file with a VS2015 project containing all code at the end of this post):

[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = false)]
public sealed class JwtTokenAuthorizeAttribute : AuthorizeAttribute
{
  public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
  {
    // Authorize a connection attempt from the client. We expect a token on the request.
    ...
  }

  public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
  {
    // Make sure the context for each method call contains our authenticated principal. No
    // additional authentication is performed here.
    ...
  }
}

And this is how we apply it:

[JwtTokenAuthorize]
public class NewEventHub : Microsoft.AspNet.SignalR.Hub
{
  ....
}

Connection authorization

All that leaves us is implementing the attribute class. First step is getting the token from the IRequest, which is simple:

public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
{
  // Extract JWT token from query string.
  var userJwtToken = request.QueryString.Get("token");
  if (string.IsNullOrEmpty(userJwtToken))
  {
    return false;
  }
  ...

You can see in the first part of this two-part series that I named the query string parameter token but you can give it any name you like of course. If there is no token on the query string, we return false to indicate authentication did not succeed.

The next step is where the magic happens: validating the token and extracting a ClaimsPrincipal from the set of claims in the JWT (JSON Web Token). Validating the token means checking the token’s cryptographic signature. The question then becomes: what do we check against? Each issued JWT is signed by the private key of a public/private key pair maintained by Azure AD (assuming of course we actually obtained a token from Azure AD). An application can use the corresponding public key to check the token signature. The public key is found in the tenant’s federation metadata document, which is available at the following URL: https://login.windows.net/yourtenant.onmicrosoft.com/federationmetadata/2007-06/federationmetadata.xml.

Lucky for us, a lot of code for handling the federation metadata document and validating the token is already available in the Katana project.

public class JwtTokenAuthorizeAttribute : AuthorizeAttribute
{
  // Location of the federation metadata document for our tenant.
  private const string SecurityTokenServiceAddressFormat =
      "https://login.windows.net/{0}/federationmetadata/2007-06/federationmetadata.xml";

  private static readonly string Tenant = "yourtenant.onmicrosoft.com";
  private static readonly string ClientId = "12345678-ABCD-EFAB-1234-ABCDEF123456";

  private static readonly string MetadataEndpoint =
      string.Format(CultureInfo.InvariantCulture, SecurityTokenServiceAddressFormat, Tenant);

  private static readonly IIssuerSecurityTokenProvider CachingSecurityTokenProvider =
      new WsFedCachingSecurityTokenProvider(
          metadataEndpoint: MetadataEndpoint,
          backchannelCertificateValidator: null,
          backchannelTimeout: TimeSpan.FromMinutes(1),
          backchannelHttpHandler: null);

  public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
  {
    // Extract JWT token from query string (which we already did).
    ...  

    // Validate JWT token.
    var tokenValidationParameters = new TokenValidationParameters { ValidAudience = ClientId };
    var jwtFormat = new JwtFormat(tokenValidationParameters, CachingSecurityTokenProvider);
    var authenticationTicket = jwtFormat.Unprotect(userJwtToken);

    ...

We start with the JwtFormat class. This class is used to extract and validate the JWT. It’s in fact a wrapper around the JwtSecurityTokenHandler class with the added bonus of ‘automatic’ retrieval of SecurityTokens from the tenant’s federation metadata document (in this case an X509SecurityToken).

The tenant’s security tokens are retrieved through the IIssuerSecurityTokenProvider interface. Unfortunately, this is where code reuse ends and copying begins. There exists an implementation of IIssuerSecurityTokenProvider that is also used in the pipeline you set up when using UseWindowsAzureActiveDirectoryBearerAuthentication: WsFedCachingSecurityTokenProvider. This class handles communication with the federation metadata endpoint, extracts the security tokens necessary to validate the JWT signature and maintains a simple cache of this information; just what we need. However, this class is internal. And it uses a number of other internal classes.

So what I did for my project was copy all the necessary classes from the Microsoft.Owin.Security.ActiveDirectory Katana project. In the code above, you see the WsFedCachingSecurityTokenProvider configured with just the URL for the metadata document (and a timeout that governs communication with the metadata endpoint). Simple as that. The call to JwtFormat.Unprotect takes care of the rest.

The next steps are some obligatory checks against the AuthenticationTicket:

  public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
  {
    // Extract and validate token.
    ...

    // Check ticket properties.
    if (authenticationTicket == null)
    {
        return false;
    }
    var currentUtc = DateTimeOffset.UtcNow;
    if (authenticationTicket.Properties.ExpiresUtc.HasValue &&
        authenticationTicket.Properties.ExpiresUtc.Value < currentUtc)
    {
        return false;
    }
    if (!authenticationTicket.Identity.IsAuthenticated)
    {
        return false;
    }

    ...

The ticket shouldn’t be null, it should not be expired and the identity should be authenticated. The final step is to somehow store the authenticated identity so that we can use it in our SignalR hub method calls. Remember, we are still just connecting with the hub and not calling any methods on it.

  public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
  {
    // Extract and validate token, check basic authentication ticket properties.
    ...

    // Create a principal from the authenticated identity.
    var claimsPrincipal = new ClaimsPrincipal(authenticationTicket.Identity);
 
    // Remember new principal in environment for later use in method invocations.
    request.Environment["server.User"] = claimsPrincipal;

    // Return true to indicate authentication succeeded.
    return true;
  }

We create a ClaimsPrincipal from the identity and store it in the environment under the key server.User. You may wonder where this key comes from. The core Owin spec defines a number of required environment keys and the Katana project extends this set. One of the extension keys is server.User which should be of type IPrincipal.

Method invocation authorization

Remember that the SignalR AuthorizeAttribute implemented two interfaces. We have implemented IAuthorizeHubConnection so what’s left is IAuthorizeHubMethodInvocation. This code is a lot shorter:

  public override bool AuthorizeHubMethodInvocation(
      IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
  {
    HubCallerContext hubCallerContext = hubIncomingInvokerContext.Hub.Context;
    var environment = hubCallerContext.Request.Environment;

    object claimsPrincipalObject;
    ClaimsPrincipal claimsPrincipal;
    if (environment.TryGetValue("server.User", out claimsPrincipalObject) &&
        (claimsPrincipal = claimsPrincipalObject as ClaimsPrincipal) != null &&
        claimsPrincipal.Identities.Any(id => id.IsAuthenticated))
    {
        var connectionId = hubCallerContext.ConnectionId;
        hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
        return true;
    }
    return false;
  }

Here we pick the ClaimsPrincipal from the environment where it was stored in the connection process. If we find it, we create a new HubCallerContext using the environment containing the principal.

Calling hub methods

Well, we’re finally where we want to be: actually calling a SignalR hub method with a principal that originates from the JWT we sent from the client. A sample hub method may look like this:

[JwtTokenAuthorize]
public class NewEventHub : Microsoft.AspNet.SignalR.Hub
{
  public async Task<string> CopyEvent(int eventId)
  {
    // Get current principal.
    var currentPrincipal = ClaimsPrincipal.Current;
    var currentIdentity = currentPrincipal.Identity;

    // Do stuff that requires authentication.

    return "Copy event successful";
  }
}

Note that we did not have to get the principal from some environment using the server.User key. This is because SignalR internally uses the same Owin classes as the Katana project so a principal stored in the environment as server.User is automatically translated into a principal on the current call context.
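If you prefer not to rely on the ambient ClaimsPrincipal.Current, the same principal should also be reachable through the hub’s caller context, since the HubCallerContext we created in AuthorizeHubMethodInvocation wraps the environment that holds server.User. A small sketch (a hypothetical variant of the CopyEvent method above, under the same assumptions):

using System.Security.Claims;
using System.Threading.Tasks;

[JwtTokenAuthorize]
public class NewEventHub : Microsoft.AspNet.SignalR.Hub
{
  public Task<string> CopyEvent(int eventId)
  {
    // Context.User reads the principal from the underlying request (the
    // "server.User" environment entry), so it carries the ClaimsPrincipal
    // we stored during connection authorization.
    var currentPrincipal = Context.User as ClaimsPrincipal;
    var userName = currentPrincipal != null ? currentPrincipal.Identity.Name : null;

    // Do stuff that requires authentication, e.g. inspect claims on currentPrincipal.

    return Task.FromResult("Copy event successful");
  }
}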

SignalRJWTAuth.zip

Credits

Some credits should go to Shaun Xu for this blog post. It shows where in the environment to store the authenticated principal and how to set the context so method calls have access to this principal.

Veeam® Availability Suite™ v9 released

A few weeks ago Veeam released a new version of its Backup & Replication suite: Veeam® Availability Suite™ v9.
For everybody who has lived under the “backup rock” for the past few years:

Veeam® Availability Suite™ combines backup, restore and replication capabilities of Veeam Backup & Replication™ with the advanced monitoring, reporting and capacity planning functionality of Veeam ONE™ for VMware vSphere and Microsoft Hyper-V.

So what’s new in Veeam® Availability Suite™ v9?


Veeam® Availability Suite™ v9 has a lot of enhancements and new features, but the improvements around primary and backup storage are surely among the biggest changes of this new release.

  • Scale-out Repositories
  • Integration with EMC snapshots
  • Veeam Cloud Connect (now with replication)
  • Direct NFS Access
  • BitLooker
  • On-Demand Sandbox™ for Storage Snapshots
  • Backup from NetApp SnapMirror and SnapVault
  • Veeam Explorer for Oracle
  • Standalone Console
  • Remote Office/Branch Office (ROBO) Enhancements
  • Advanced Tape Support

I will describe a few of these new additions below.

Scale-out Repositories

Managing backup storage can be a tricky business. This is largely due to the fact that the exponential rate of data growth is outpacing the ability to manage it efficiently. Physical storage units simply hit their maximum configuration, cannot be expanded at all, or backup storage simply ends up at the low end of the range because of the costs.

Scale out Backup Repository provides an abstraction layer over individual storage devices to create a single virtual pool of backup storage to which to assign backups.

With this you can extend repositories when they run out of space. Instead of facing long and complicated relocations of backup chains (which can become huge at large customers), users can add a new extent (that is, a “simple” backup repository) to the existing scale-out repository. All existing backup files are preserved, and by adding an additional repository to the group the same backup target gains additional free space, immediately available to be consumed. Veeam gives this a software-defined touch; to me this is the “software-defined backup repository”.

A webinar about Scale-out Repositories is available here, and a simple explanation can be found on the Veeam YouTube channel.

Direct NFS Access

NFS users have felt a bit like second-class citizens in the virtualized world, and the lack of direct access to NFS storage to read data during backup operations was surely one reason to “envy” block storage users. Now, in v9, something similar to the Direct SAN processing mode has been made available for NFS as well. This new feature is called Direct NFS. With it, any new Veeam proxy will run a new and improved NFS client to directly access any NFS share exposed to VMware vSphere, supporting both the traditional NFS v3 and the new NFS 4.1 available in vSphere 6. With the complete visibility of individual files allowed by the NFS share, NFS users will be able to back up and replicate VMs directly from the NAS array and avoid the need to cross the hypervisor layer for these activities. This will result in faster backups and an even smaller load on production workloads. So say goodbye to the forced “NBD” mode for NFS. Be aware that when you upgrade from Veeam v8 you will need to deploy new proxies to be able to use Direct NFS.


Standalone Console

Veeam’s standalone console provides every user with convenience, flexibility and ease of use by separating the Veeam Backup & Replication console from the backup server. This new installation can be done on laptops and desktops, forever eliminating RDP sessions to a backup server.

This also makes it possible to manage multiple separate Veeam backup servers from the comfort of your own system. Multi-user support has also been added: backup administrators will now be warned of conflicting edits when attempting to save changes after editing the same job concurrently.

Enterprise Manager

The Enterprise Manager has also been given a useful update: single sign-on now supports native Windows authentication in Active Directory environments. So no more needless logging on when you are already logged on to the domain.

Veeam Automation

RESTful API integration is now available in all product editions when a per-VM license is installed. This is a very welcome addition for service providers, for example, to automate a lot of Veeam functionality within their own customer portals.

To wrap this blog up: Veeam has done a pretty good job with v9, listening to community feedback and adding those much-wanted features. Curious about all the other new features and what they bring? Hop over to the Veeam website and catch up on your reading.

What’s New in User Environment Manager 9.0

VMware has announced news about the coming release of User Environment Manager 9.0! After last year’s acquisition of Immidio, the first real changes are becoming visible in User Environment Manager 9.0. This blog will guide you through the key changes and new features.

Horizon Cloud Manager
With the release of User Environment Manager 9, VMware has taken the first step towards cloud-based management. To achieve this, the engine of ‘Project Astro’ delivers the new interface for User Environment Manager 9. Currently not all Management Console features have been migrated in this release, so it is a hybrid situation at the moment.

Application Authorization
You can use this feature to specify which users can run particular applications in your organization. Another cool aspect: it is fully compatible with VMware App Volumes. For example, it is no longer necessary to manage the security aspects there, because you can now integrate this with Application Authorization. Applications are black- and whitelisted based on configured paths and executables.

Horizon 7 Smart Policy
A highly requested feature was applying policies based on different conditions across locations and devices. Horizon 7 Smart Policies now bring contextually aware, fine-grained control of client-side features. IT can selectively enable or disable features like clipboard redirection, USB, printing, and client drive redirection. All of this can be enforced based on, for example, location, named user, client MAC address and even any given VDI pool.

ThinApp Customizations
In mixed environments, seamless roaming of user settings between ThinApped applications and locally installed applications was always a pain. With this new release, seamless roaming with ThinApp 5.2 (or higher) has been improved. It is now even possible to roam your Microsoft Outlook signature without any issues!

Control Personal Data
Until now, using Microsoft Group Policy for Folder Redirection was always the best practice. User Environment Manager 9.0 brings the Folder Redirection extension to the Management Console, offering this functionality from the same interface without the need to use the traditional Group Policy interface.

In my opinion, the User Environment Manager Management Console is now a true single pane of glass for simplifying end-user profile management. I’m looking forward to getting some hands-on time with User Environment Manager 9.0 when it’s officially released!

A New Era in EUC – What’s New in Horizon 7

Today VMware has announced exciting news about the release of Horizon 7! In this release they have made great changes to the architecture as well as under the hood. This blog will guide you through the changes and the great new features.

Architecture
For Horizon 7, VMware extended the capabilities for delivering desktops and applications by releasing the following two products:
• VMware Horizon Air Cloud-hosted
• VMware Horizon Air Hybrid

With the Horizon Air Cloud-Hosted architecture you can now deliver desktops and applications directly from the cloud. With this architecture you can easily connect your on-premises AD with Horizon Air to maintain access and security management. Another great feature is that you can choose the specific desktop for any user workload from an easy-to-use GUI.

A question that comes up a lot is how to temporarily add extra VDI resources. With an on-premises installation you must buy or rent extra hardware to facilitate these needs. Now, with Horizon Air Hybrid, you can simply add extra resources to your Horizon infrastructure from the cloud. With this hybrid infrastructure you can also create a Horizon DR location in the cloud, so that you are prepared for any kind of failure.

Scalability
With every new version of Horizon, the maximums are raised to new heights. Horizon 7 now supports a Cloud Pod that scales up to 50,000 sessions across up to 10 sites, with 25 pods of infrastructure.

Just in Time Delivery of Desktops
Horizon 7 comes with a new technique called “Instant Cloning”. With this new technology it is possible to “hot-clone” virtual machines. How does this process work? The running parent VM is quiesced when the hot-clone comes into action, and the clone rapidly leverages the disk and memory of that VM. Because the parent VM is in a running state there is no boot storm, and additional configuration during boot is not necessary.

One of the questions I get a lot is how to keep my Windows desktops up to date. With hot-cloning this isn’t an issue anymore: the parent VM is always running, and patches which are applied to this VM will also be included in the new desktops.

For more information about Instant Cloning (Project Fargo) I refer you to a great blog written by my colleague Johan van Amersfoort: http://itq.nl/project-fargo-and-app-volumes-hans-klok-meets-euc/

Blast Extreme
We already know the Blast protocol, which introduced HTML5 access for Horizon View. Now its big brother, Blast Extreme, kicks in with great new features. Blast Extreme uses the H.264 display technology, which is optimized for the widest array of devices. With this new built-in protocol the following advantages are realized:
• 3D functionality: vGPU support for NVIDIA GRID cards
• Longer battery life on mobile devices by lowering CPU consumption
• Less network bandwidth consumption
• Leverages both TCP and UDP transport capabilities
• Support for unified communications
• Adapts better to lossy networks; built for the cloud

Access Points
Access Point (formerly known as the Security Server) is the unified security solution, which is placed as a front-end server for all Horizon products. From a security perspective this server has become (also in Horizon 6) a hardened Linux appliance which can be easily deployed and scaled out.

In former versions it was necessary to pair an Access Point to a Connection Server. In that setup the authentication level and methods were attached to the Connection Server.
In the new Access Point the following authentication methods can now be handled natively within the appliance:
• RADIUS
• RSA SecurID
• Smart card

Smart Policies
With Smart Policies, Horizon 7 introduces a robust suite of security- and policy-focused capabilities that can help customers improve their security posture. Security management in the older Horizon (View) versions was based on user groups or desktop pools. With the introduction of Smart Policies come policy-managed client features: with policy-based access it is now possible to attach security to roles which are distributed to users. The policies can be based on location, insecure login or even individual users.

For example, users who log in from an insecure network cannot use their USB devices.
With Smart Policies the security can be centrally managed; IT is finally back in control!

Other Horizon 7 improvements
The following improvements are implemented in Horizon 7:
• AMD GPU Support with vDGA
• Intel vDGA graphics support in Intel Xeon E3
• Flash redirection to the end-point
• URL content redirection to the end-point
• Horizon Clients are updated for all operating systems
• Horizon certifications meet the requirements of FIPS 140-2

SignalR hub authentication with ADAL JS (part 1)

In a previous post I described how to use ADAL JS with Azure AD role-based authorization. This works fine when you’re securing a Web API or MVC backend. However, what about SignalR hubs? In short, SignalR enables real-time communication between a client and a web server. The client can call methods on a so-called hub and the server can push messages to clients (all clients, a specific group of clients, only the calling client, etc.). SignalR is an abstraction over a number of transport methods: preferably WebSockets, but if either the browser or the server does not support this, a number of fallback protocols exist: server-sent events, forever frame or Ajax long polling.

Suppose you have an HTML/JS front-end and a back-end that exposes a SignalR hub. The official documentation suggests that you should integrate SignalR into the existing authentication structure of the application. So you authenticate to your application, inform the client of the relevant authentication information (username and roles, for example) and use this information in calls back to the SignalR hub. This seems a bit backward if you ask me.

I already have ADAL JS on the client (browser). ADAL JS provides the client with a JWT token that is stored in local or session storage. So the first question is: how do we configure SignalR on the client to send the token along with requests to the SignalR hub on the server? That’s the topic of the current post. In the next post, the server side of things will be handled.

On the client I use AngularJS and jQuery so I also use the ADAL JS Angular wrapper. This makes initialization easier and allows you to configure ADAL JS on routes to trigger authentication. So the code samples assume that you use AngularJS and ADAL AngularJS. The client-side SignalR library allows for easy extension of the SignalR connect requests from the client to the server, as shown in the following example:

// Id of the client application that must be registered in Azure AD.
var clientId = "12345678-abcd-dcba-0987-fedcba12345678";

var NotificationService = (function () {
  // Inject adalAuthenticationService into AngularJS service.
  function NotificationService(adalAuthenticationService) {
    this.adalAuthenticationService = adalAuthenticationService;
  }

  NotificationService.prototype.init = function () {
    var self = this;

    $.connection.logging = true;
    $.connection.hub.logging = true;
    $.connection.hub.transportConnectTimeout = 10000;

    // Add JWT token to SignalR requests.
    $.connection.hub.qs = {
      token: function () {
        // Obtain token from ADAL JS cache.
        var jwtToken = self.adalAuthenticationService.getCachedToken(clientId);
        return (typeof jwtToken === "undefined" || jwtToken === null) ? "" : jwtToken;
      }
    };
    $.connection.hub.start();
  };
  NotificationService.$inject = ["adalAuthenticationService"];
  return NotificationService;
})();
appMod.service("notificationService", NotificationService);
appMod.run(["notificationService", function (notificationService) {
  // Start the SignalR connection once the Angular application runs.
  notificationService.init();
}]);

The magic happens on the $.connection.hub.qs property. Parameters specified there are sent on subsequent SignalR negotiate, connect and start requests [1]. So we’d expect a token parameter in our case. When recording network traffic between the client and the server we can see this is actually happening:

SignalR network traffic

And here are the details of the connect request. You can see that subsequent ping requests also contain the ADAL JS JWT token:

SignalR network traffic connect

So that’s it for the client side of things. In the next post we switch to the server and see how to intercept the token and use it to create a principal that can be used for authorization.

Notes
  1. There’s an excellent explanation of what happens on the wire with SignalR here.