Tuesday, 27 December 2011

Implementing Azure AppFabric Service Bus - Part 1


Introduction

As a business grows into the global market, its requirements grow with it at every level of interaction with different types of customers. When a business interacts with many customers and partners, the volume of data exchanged grows heavily too. So there is a strong demand to collect the data exposed by partners' systems that relate to our own, and process it together with our company data to produce meaningful reports on how the business is doing. It is also worth exposing our data through web services and consuming partner data programmatically, instead of manually re-feeding or importing it into our system.

Microsoft Azure AppFabric Service Bus provides a seamless way of connecting systems and exchanging data between them. The Service Bus uses WCF as its basic architecture both for exposing data to the outside world and for consuming it, so all the features available in WCF can be reused when developing for the Service Bus.

Normally, connecting two systems that sit in different premises is a very tedious job for the developer, requiring custom coding and testing. It may also violate the organization's security policies. Using the Service Bus, it is very easy to expose data to another organization over the internet without breaking any security (firewall) rules, and everything can still be controlled from on-premise.

Some organizations hold very critical data; the financial and pharma sectors, for example, may not accept keeping their databases in the cloud (in Azure Storage Services, SQL Azure or Amazon S3). Yet business requirements may still demand exposing some data publicly, with a secure mechanism for third-party vendors to access it. The Service Bus plays a perfect role here: the database stays on-premise while the data is exposed to the public via a service call, in the standard fashion defined by WCF.

Currently the Service Bus provides two different messaging capabilities.

  1. Relayed Messaging:

    In Relayed Messaging, the service and the consumer must be online at the same time and communicate with each other synchronously through a relay service.

    The on-premise service first reaches the Relay Service sitting in the Azure environment through an outbound port and creates a bidirectional socket connection between itself and the Relay Service. When a client wants to communicate with the service, it sends its message to the Relay Service, which establishes the connectivity between the on-premise service and the requesting client. The Relay Service thus acts as the communication channel between client and service, so the client never needs to know where the service is actually running.

    The Relay Service supports various protocols and web service standards, including SOAP, WS-* and REST. It supports traditional one-way messaging, request/response messaging and peer-to-peer messaging, and also supports event distribution scenarios such as publish/subscribe and bi-directional socket communication.

    In this messaging pattern, both the service and the client are required to be online for any communication to take place.

  2. Brokered Messaging:

    Brokered Messaging provides an asynchronous communication channel between service and client, so the service and the consumer are not required to be online at the same time.

    When the service or the client sends a message, it is stored in the brokered messaging infrastructure (Queues, Topics and Subscriptions) until the other end is ready to receive it. This pattern allows the applications to remain disconnected from each other and connect on demand.

    Brokered messaging enables asynchronous messaging scenarios such as temporal decoupling, publish/subscribe and load balancing.
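To make the idea concrete, here is a minimal sketch of brokered messaging (not the focus of this post). It assumes the Microsoft.ServiceBus client assemblies are referenced and that a queue named "orders" already exists in a namespace called "mynamespace"; the issuer name and key are placeholders for the values shown in the AppFabric portal.

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class BrokeredMessagingSketch
    {
        static void Main()
        {
            // Placeholder namespace and credentials - use the values from your
            // own AppFabric Service Bus namespace.
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty);
            TokenProvider tokenProvider =
                TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>");

            MessagingFactory factory = MessagingFactory.Create(address, tokenProvider);
            QueueClient client = factory.CreateQueueClient("orders");

            // Sender and receiver do not have to be online at the same time;
            // the message waits in the queue until someone calls Receive().
            client.Send(new BrokeredMessage("New order received"));

            BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(10));
            if (received != null)
            {
                Console.WriteLine(received.GetBody<string>());
                received.Complete();   // remove the message from the queue
            }

            factory.Close();
        }
    }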

In this post, we will see how Relayed Messaging works; a real-time implementation will follow in the next post.

How Relayed Messaging Works:

Relayed Messaging communication is accomplished using a central Relay Service, which provides connectivity between the service and the client. Below is the architecture of how the Service Bus works.


As per the diagram, the service and the client can sit in the same premise or in different premises. The communication happens as described below.

  1. When the service is hosted in an on-premise IIS server, it connects to the Azure relay service using an outbound port.
    Some ports are required to be open on the on-premise server for this connection; which ones depends on the binding used with the Service Bus. The link below describes the port configuration requirements for each binding.
    http://msdn.microsoft.com/en-us/library/ee732535.aspx
  2. The service reaches the relay service via the on-premise firewall, so all the security policies applied to the firewall also apply to this connection.
  3. Once the relay service receives the request from the service, it creates a bi-directional socket connection with the service using a newly created rendezvous address. This address will be used by the relay service for all further communication with the service.
  4. When the client needs to communicate with the service, it first sends the request to the relay service.
  5. The relay service forwards the request to the service over the bi-directional socket connection already established on the rendezvous address.
  6. The service gets the request and returns the required response to the relay service.
  7. The relay service then returns the response to the client.

Here, the relay service acts as a mediator between the service and the client. As the client never knows where the service is running, it cannot connect to the service directly on the initial request. But once an initial connection to the service has been made through the relay service, the client can communicate with the service directly by using the Hybrid connection mode.
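For instance (a minimal sketch, assuming the Microsoft.ServiceBus SDK assembly is referenced), the NetTcpRelayBinding exposes a ConnectionMode property for exactly this: set it to Hybrid and the binding starts out relayed, then upgrades to a direct socket between client and service once a direct path can be negotiated.

    using Microsoft.ServiceBus;

    static class HybridBindingSketch
    {
        public static NetTcpRelayBinding CreateBinding()
        {
            NetTcpRelayBinding binding = new NetTcpRelayBinding();

            // Relayed (default): every message keeps flowing through the relay.
            // Hybrid: begins relayed, then switches to a direct socket between
            // client and service once the relay has negotiated a direct path.
            binding.ConnectionMode = TcpRelayConnectionMode.Hybrid;
            return binding;
        }
    }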

Conceptually, all of this is easy enough to understand. But when I read it for the first time, I had a question: how does this actually work? What makes the WCF service sitting on my on-premise server actually communicate with the one sitting in the cloud?

One of the major pieces that enables a WCF service to relay through the cloud is the binding, which comes as part of the Service Bus SDK. To make a WCF service discoverable to the public, it must use one of the relay bindings, such as BasicHttpRelayBinding, WebHttpRelayBinding, NetTcpRelayBinding, WS2007HttpRelayBinding, NetOnewayRelayBinding or NetEventRelayBinding. When a service uses a relay binding, it effectively delegates the listening responsibility to the relay service sitting in the cloud. So when an incoming message reaches the relay service, it simply forwards the message to the on-premise service.

Below is the list of relay bindings that enable a standard WCF service to relay through the cloud.

Standard WCF Binding      Equivalent Relay Binding
--------------------      ------------------------
BasicHttpBinding          BasicHttpRelayBinding
WebHttpBinding            WebHttpRelayBinding
WS2007HttpBinding         WS2007HttpRelayBinding
NetTcpBinding             NetTcpRelayBinding
N/A                       NetOnewayRelayBinding
N/A                       NetEventRelayBinding

As the list shows, each standard WCF binding has a corresponding relay binding, so it is very easy to enable a standard WCF service to relay through the cloud. Only the last two bindings (NetOnewayRelayBinding and NetEventRelayBinding) have no standard equivalent; they are specially designed to work with the Service Bus.
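As a rough sketch of what this looks like in code: the service contract, the namespace name "mynamespace" and the issuer name/key below are placeholders, and the relay types are assumed to come from the Microsoft.ServiceBus SDK (version 1.5 or later). Hosting a plain WCF service on the relay is mostly a matter of swapping in the relay binding and attaching the Service Bus credentials to the endpoint.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEchoContract
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEchoContract
    {
        public string Echo(string text) { return text; }
    }

    class RelayHostSketch
    {
        static void Main()
        {
            // The public address of the service on the relay - clients talk to
            // this address, never to the on-premise machine directly.
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "Echo");

            ServiceHost host = new ServiceHost(typeof(EchoService));
            ServiceEndpoint endpoint =
                host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);

            // Shared secret issued for the namespace; the relay uses it to
            // authorize the listener before accepting the outbound connection.
            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>")
            });

            host.Open();   // opens the outbound connection to the relay service
            Console.WriteLine("Listening on " + address);
            Console.ReadLine();
            host.Close();
        }
    }

Opening the host is where the delegation happens: the relay registers the address and listens on the service's behalf, while the on-premise process just keeps the outbound connection alive.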

When the relay service needs to forward a message to the on-premise service, it does not talk to the on-premise service directly. Instead it registers an address with HTTP.SYS and asks it to forward all messages to that registered address. The same happens at the on-premise service, which registers an address with HTTP.SYS and has all messages forwarded to it. So both the cloud and the on-premise sides communicate through registered addresses.

As the Service Bus requires a direct socket connection with the on-premise service, the on-premise service must first let the Service Bus know where it is running. The service therefore connects to the Relay Service, and a rendezvous address is created. Once the address is registered, all communication happens through it.

To expose an on-premise service to the public, some outbound ports are required to be open in the network. Which ones depends on the binding used with the Service Bus; the link below provides the port configuration requirements for each binding.
http://msdn.microsoft.com/en-us/library/ee732535.aspx

Using Windows Server AppFabric Hosting Services

The WCF service should be running at all times to receive messages from the Service Bus. Once the service stops running (or disconnects), the public endpoint becomes unavailable and clients can no longer consume the service. So we need the service to keep running after system restarts, IIS restarts and so on. Windows Server AppFabric Hosting Services is another Microsoft release, which enables the WCF service to start running automatically whenever its host starts.

Let us understand how Windows Server AppFabric helps a WCF service expose itself to the Service Bus:


As shown in the picture above, when a WCF service is exposed to the Service Bus, the following steps take place.
  1. The service registers its endpoints with the Service Bus, giving each a unique endpoint URL.
  2. The Service Bus exposes the service to the public using the defined endpoint and the given interface.
  3. Using the Service Bus namespace name and secret keys, a client can discover the endpoint address that the Service Bus exposes.
  4. Once the client knows the endpoint address, it can invoke operations on the Service Bus endpoint.
  5. For each call from the client, the Service Bus invokes the corresponding operation on the on-premise service using the connection already made.
As per the steps above, the WCF service registers its endpoint with the Service Bus (step 1) and establishes a TCP connection. Once the connection between the WCF service and the Service Bus is open, it remains open so that the Service Bus can call the on-premise service whenever a client invokes it. When a client calls the Service Bus for an operation, the Service Bus makes the call back over the connection the on-premise service already established. The on-premise network firewall sees this as return traffic on a connection initiated by the service, so it will not block it. As the connection stays open, the registered address remains the same for further communication.
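On the client side (again only a sketch, reusing the hypothetical IEchoContract, namespace and credentials from the earlier listing), the consumer builds a channel against the relay address using the matching relay binding and the secret name and key it was given (steps 3 to 5):

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    class RelayClientSketch
    {
        static void Main()
        {
            // The same relay address the service registered; the client never
            // needs to know where the on-premise service actually runs.
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "Echo");

            ChannelFactory<IEchoContract> factory = new ChannelFactory<IEchoContract>(
                new NetTcpRelayBinding(), new EndpointAddress(address));

            // The secret name and key handed out by the exposing organization.
            factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>")
            });

            IEchoContract channel = factory.CreateChannel();
            Console.WriteLine(channel.Echo("Hello through the relay"));

            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }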

The Windows Server AppFabric Hosting Services start the WCF service hosted on the IIS server so that it always registers and maintains its connection with the Service Bus (step 1).

This architecture provides a lot of flexibility for controlling and managing communication. With this model it is easy to use one-way messaging, request-response, publish-subscribe (multicast), and asynchronous messaging using message buffers.
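For instance, the publish-subscribe (multicast) style can be sketched with NetEventRelayBinding; the contract, namespace and credentials below are again placeholders. Every listener that opens a host on the same relay address receives a copy of each one-way message.

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IAlertContract
    {
        // Event-style contracts must be one-way; the relay multicasts each
        // message to every listener registered on the same address.
        [OperationContract(IsOneWay = true)]
        void Raise(string alert);
    }

    public class AlertListener : IAlertContract
    {
        public void Raise(string alert) { Console.WriteLine("Received: " + alert); }
    }

    class MulticastSketch
    {
        static void Main()
        {
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "Alerts");
            var credentials = new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>")
            };

            // Any number of hosts can listen on the same address with
            // NetEventRelayBinding; each one receives every published alert.
            ServiceHost host = new ServiceHost(typeof(AlertListener));
            host.AddServiceEndpoint(typeof(IAlertContract), new NetEventRelayBinding(), address)
                .Behaviors.Add(credentials);
            host.Open();

            // Publisher side: a plain one-way channel aimed at the same address.
            var factory = new ChannelFactory<IAlertContract>(
                new NetEventRelayBinding(), new EndpointAddress(address));
            factory.Endpoint.Behaviors.Add(credentials);
            factory.CreateChannel().Raise("Service Bus multicast test");

            Console.ReadLine();
            factory.Close();
            host.Close();
        }
    }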

Security in Service Bus

As the service is published on a public endpoint and anyone can reach it over the internet, how is the service secured? As the service exposes critical data to the public, how can access be controlled? To answer these questions, various factors come into the picture. Some of them follow:

The Service Bus is exposed to the public with an endpoint address, a secret (issuer) name and a key. When third-party vendors consume the service through the public endpoint, they must use the secret name and key values with which it was exposed. If somebody is suspected of consuming the service without the source organization's knowledge, consumption can be stopped immediately by changing the secret key and informing the genuine vendors of the new key value. The whole service can also be stopped by stopping the web site in the on-premise IIS console where the service is hosted, so that no one can consume it.

Another important point related to security is the Access Control Service (ACS). ACS is part of Azure AppFabric and is used to provide identity and access control for services using standard identity providers such as Windows Live ID, Google, Yahoo, Facebook and an organization's Active Directory. So the end user can access the services through any of multiple identity providers.

For example, if a service is secured by Windows Live ID, it can be accessed only after a successful login with Live ID credentials. ACS also supports ADFS integration with an on-premise Active Directory, so any of the organization's employees or customers can access the service from anywhere using their organization's AD login credentials. This also means there is no need to create and manage separate login credentials for each application, and employees do not have to remember many sets of credentials.

This post gave an introduction to what the Service Bus is and how Relayed Messaging works. For more information on Azure AppFabric Service Bus, please refer to the following links:

http://www.microsoft.com/windowsazure/features/servicebus/
http://msdn.microsoft.com/en-us/library/ee732537.aspx

In the next post, I will walk through a real-time implementation of a Relayed Messaging Service Bus.


2 Responses to “Implementing Azure AppFabric Service Bus - Part 1”

  • Unknown says:
    3 September 2012 at 09:27

    Very easy to understand,
    but I have one question regarding the service:
    when the service is running on premises, do we need to deploy the service in the cloud also, or only on premises?

  • Thirumalai M says:
    3 September 2012 at 11:24

    Hi Rajesh, the service will be running on-premise only. The consumer (the client) can run in the cloud or on another (other) on-premise server.

    One use case: our service runs on our enterprise's on-premise servers and is exposed through the Service Bus. The other partner's on-premise application can then consume and process our service without connecting directly to our network. If the partner hosts the client app in the cloud (Azure), that works as well.
