EAI Guy.net

Enterprise Application Integration and SOA 2.0

Securing ServicePulse

We are using NServiceBus and the awesome new suite of monitoring tools, and go-live is just around the corner. We are hosting our audit and error queues on a dedicated audit server, as recommended, along with ServiceControl and ServicePulse. How do we configure authorization for the ServicePulse website to allow a select group of IT Ops users to access the site without opening up access to the whole company?

Self-Hosted Default

By default, ServicePulse runs as a self-hosted web server with no option to add authentication or authorization:


Hosting ServicePulse in IIS

However, ServicePulse also has a feature for extracting website files to a folder, like this:

C:\Program Files (x86)\Particular Software\ServicePulse>ServicePulse.Host.exe --extract --serviceControlUrl="http://localhost:33333/api" --outPath="C:\temp\SpWeb"

This enables you to create your own IIS website with a few clicks:


And now you have an IIS-hosted ServicePulse website to which you can add Windows auth or another authentication and authorization mechanism:


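With the site in IIS, access can then be restricted to your IT Ops group. Here is a minimal web.config sketch using IIS URL Authorization; the DOMAIN\ITOps group name is a placeholder for your own AD group, and the URL Authorization and Windows Authentication features must be enabled in IIS:

```xml
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <security>
      <authorization>
        <!-- Deny everyone, then allow only the IT Ops AD group (placeholder name) -->
        <remove users="*" roles="" verbs="" />
        <add accessType="Allow" roles="DOMAIN\ITOps" />
      </authorization>
    </security>
  </system.webServer>
</configuration>
```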
So What About ServiceInsight?

Unfortunately, Particular Software does not yet provide a means for enabling user-level authorization on the ServiceControl REST API, so the options for accessing ServiceInsight are:

  1. Leave the SC REST API accessible only on the server (default behavior), which requires users to remote into the server to use ServiceInsight
  2. Set a custom host name for the SC REST API and expose it to everyone on the network

Neither of these options feels very satisfying to me. Please add any thoughts and suggestions here: https://github.com/Particular/ServiceControl/issues/400


If you are setting up a new NServiceBus installation or are upgrading to the Particular Platform from an older version of NServiceBus, I hope this post helps you secure your ServicePulse dashboard.

Questions for Estimating Software-Integration Development Time


image copyright vladtenu.com

I have been guilty of severely underestimating the time required to integrate software applications – think order of magnitude. This post lists questions that, in hindsight, I would have asked to flesh out integration estimates, since the more line items an estimate contains, the higher, and consequently the more accurate, it becomes.


These questions originated when developing a greenfield application that communicated with an existing system. There are additional considerations when integrating multiple greenfield or multiple existing systems, but many of the questions in this post still apply. The questions below refer to the greenfield application as “our application”, and the existing application as the “external system”.

External-System Questions

  • Is the business on the latest version of the external system?
    • Is the vendor scheduled to end support for the business’ version? If so, when?
    • How frequently does the version change?
    • Has the business already considered upgrading? If not, they should consider upgrading to prevent integration re-work.
  • Is up-to-date, sufficiently detailed technical documentation available for the external system?
  • Has a system architect been identified who knows how data flows through the external system and what integration capabilities the system has? Warning: finding the expert can take time.
  • Is there a test instance of the external system we can test against?
    • Have we been granted sufficient permissions to test against this instance?
    • Have we verified our access?
  • Is sample data from the external system available?
    • Is a sufficiently large set of sample data available to provide complete coverage of all variations that could affect the integration?
    • Is the sample data accurate? Specifically, is the sample data human generated, or is it generated by the external system in the same way the system will generate data that will be used by the integration?
  • What is the quality level of the data in the external system?
    • Does the external system implement validation to ensure data quality? If so, which validation methods does the system use?
      • Database constraints?
      • Application validation? Warning: do not assume that application-level validation rules are reflected in the data store, since application validation rules can change over time.
    • How will our application correlate records with the external system? What unique identifiers will be used?
    • Are these identifiers truly unique in the external system, or are there duplicates?
  • Does the external system support pushing data to our application when events occur? *
    • Yes
      • On what interval – real time? Hourly? Daily?
      • Is the frequency sufficient for the needs of the integration?
    • No
      • Will we be responsible for writing code to extract data from the external system’s data store? Warning: this introduces significant risk.
      • Has the business granted us needed access to the data store?

Integration Questions

  • How many distinct entities (types of data) will be integrated?
    • Will all entities be transmitted in the same manner?
    • How many distinct business scenarios will use these entities? Will each scenario require its own mapping logic?
  • What are the temporal requirements of the integration?
    • Real time?
    • Semi real-time (< 1 minute)?
    • On an interval or schedule?
      • What interval configurability is required?
      • What scheduling flexibility is required?
  • Will the integration be required to store any data?
  • Do the systems have any different or conflicting data constraints?
  • What error handling and reporting features are needed? *
    • To whom does our application need to display failed records and errors? (choose many)
      • IT?
      • Business users?
      • The external system vendor?
    • Where do errors need to be communicated to the client? (choose many)
      • Errors in log files?
      • Notifications via email, etc.?
      • A work-list of bad records for business to review and take action on? Which actions?
    • Will the integration be responsible for storing integrated data in a way that will enable re-submission of failed transactions?
  • What throughput is required of the integration?
    • What peak data volume does the integration need to support?
    • What is the maximum individual transfer size that needs to be supported?
  • What security methods are required?
    • Will our application authenticate against the external system? What authentication mechanism will be used?
    • Will our application have to store credentials?
    • Does our application need to store credentials in configuration files? Do those configuration files need to be encrypted?
  • Is there any sensitive data involved in the integration (SSNs, birth dates, personal health information, personal financial information, etc.)? Warning: this introduces security and liability concerns.
    • For testing, will we have access to real data?
    • How will we secure the data in our test environment?
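On the credentials questions above: if the integration is built on .NET, one low-effort option is encrypting sensitive configuration sections in place with the aspnet_regiis tool that ships with the .NET Framework. A sketch, where the section name and application path are examples:

```bat
REM Encrypt the connectionStrings section of the web.config in C:\MyIntegration
REM (run from the Framework directory, e.g. %windir%\Microsoft.NET\Framework64\v4.0.30319)
aspnet_regiis -pef "connectionStrings" "C:\MyIntegration"

REM Decrypt it again when needed
aspnet_regiis -pdf "connectionStrings" "C:\MyIntegration"
```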

Post-Development Questions

  • How will health-monitoring be performed once the integration goes live? *
  • Is any support documentation required? Are instructions for locating and managing failed records needed?
  • Will the business require our assistance to verify the integration during user-acceptance testing?


This post is intended to be a starting point, spurring one’s imagination to the range of possible considerations when estimating effort levels in software integration. I still strongly recommend including an additional estimate buffer, since Hofstadter’s Law certainly applies.

* Advice: Don’t re-invent the wheel. Integration services like NServiceBus (simple, low-cost, yet powerful), BizTalk (complex, high-cost), and others include out-of-the-box capabilities for publish-subscribe, asynchronous messaging, automatic retries, failed-message re-submission, and health monitoring.

BizTalk 2010 – Converting WSE Send Port to WCF

This post outlines my efforts to upgrade a BizTalk application that calls a deprecated Web Service Enhancements (WSE) web service.


This post covers creating a BizTalk 2010 WCF send port and the required pipeline components to call a WSE web service. In my scenario, I used a two-way WCF-BasicHttp send port subscribing to messages coming from a receive location. The response returned from the WSE web service contained an escaped XML payload, so I wrote a custom pipeline component (perhaps fodder for a future blog post) to un-escape the XML. The response also contained a success/error status field, which I promoted. I created a send port subscribing to the Success status, and an orchestration subscribing to the Error status for generating ESB fault messages.

Step 1: Capture a WSE HTTP Request

My first step in replicating WSE send-port functionality with WCF was to try capturing an HTTP Post from my BTS 2006 WSE send port.

  • I first tried using Fiddler to intercept the request, which required me to set the WSE port’s proxy to Fiddler’s local proxy address. However, this proxy address caused BizTalk to suspend the message with this error: Invalid URI: The hostname could not be parsed.
  • I then turned to Wireshark with better luck. I removed the Fiddler proxy settings, started a capture, kicked off a message, and saw the HTTP Post show up in Wireshark. Right-clicking the record and selecting Follow TCP Stream converted the stream to readable text:
  • The HTTP post body is displayed below; this is the SOAP format I needed to replicate with WCF.
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/03/addressing" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security soap:mustUnderstand="1">
      <wsu:Timestamp wsu:Id="Timestamp-45641c38-06a4-4b44-b382-6d35471e275f">
        <!-- wsu:Created and wsu:Expires elided -->
      </wsu:Timestamp>
      <wsse:UsernameToken xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" wsu:Id="SecurityToken-a4c6efe3-4d17-47f8-8d1a-d3239fe3cdcf">
        <!-- wsse:Username, nonce, and created values elided -->
        <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">MyPassword</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <ns0:Export xmlns:ns0="http://mySite.com/webservices/">
      <ns0:xmlInputDocument>&lt;?xml version="1.0" encoding="utf-8" ?&gt;&lt;moreStuff /&gt;</ns0:xmlInputDocument>
    </ns0:Export>
  </soap:Body>
</soap:Envelope>

Step 2: Create WCF Schema and Bindings and Test

I then generated schemas and bindings, and tested hitting the WSE web service with a WCF adapter to see how far off from the above format I was.

  1. I used BizTalk’s “Add Generated Items” feature to kick off the WCF Service Consuming Wizard, which generated a schema and a bindings file.
  2. After deploying the application, I imported the generated bindings file to create a WCF-BasicHttp send port.
  3. I set the port to use Fiddler’s local proxy address so I could inspect the HTTP Post and response with Fiddler.
  4. I then set up various receive and send ports to test the WCF send port and kicked off a message.
  5. The result was the following error returned from the WSE web service: Unexpected System Exception:#NullReferenceException – Object reference not set to an instance of an object. received – contact Application manager.
  6. Fiddler showed that the SOAP request generated by my WCF port was missing a <header> element:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <!-- no <s:Header> element here -->
  <s:Body>
    <ns0:Export xmlns:ns0="http://mysite.com/webservices/">
      <ns0:xmlInputDocument>&lt;?xml version="1.0" encoding="utf-8" ?&gt;&lt;moreStuff /&gt;</ns0:xmlInputDocument>
    </ns0:Export>
  </s:Body>
</s:Envelope>

Step 3: Generate SOAP Headers

The next step was to generate the SOAP headers, then get the WCF adapter to include the headers in the request it generates.

  1. I learned from Mikael Sand’s blog post on Setting custom SOAP headers in the WCF adapter that I needed to create an OutboundCustomHeaders context property and populate it with my SOAP header information so the WCF adapter would include my headers in its request. I accomplished this via a custom Assemble pipeline component, as described in the next section.
  2. A closer review of the SOAP header revealed a mix of static and dynamic content, as highlighted below in blue and red, respectively. The next section outlines my solution. I found Ben Powell’s blog post helpful in generating the WSE UsernameToken element.


Step 4: Custom Assemble Pipeline Component

I created a custom “assemble” pipeline component for generating SOAP headers and including them in the message context.

  • I created a custom pipeline component using the wizard.
  • I gave the pipeline component properties for Username and Password, and a property called AdditionalHeaderElementText to hold static header content (the blue portion above), to allow flexibility. I set this property to the static header elements from the captured request.
  • Note: If you set component values when creating a pipeline, then export bindings from the admin console, the pre-set values do not show in bindings. To avoid confusion, I recommend leaving property values blank when creating a pipeline, and manually setting them in the admin console. 
  • Here is my Execute method, which successfully generated the SOAP header for inclusion by the WCF adapter in the HTTP payload:
// WSE 3.0 runtime (Microsoft.Web.Services3.dll):
// http://www.microsoft.com/downloads/en/details.aspx?FamilyID=018a09fd-3a74-43c5-8ec1-8d789091255d&displaylang=en
using System;
using System.Text.RegularExpressions;
using System.Xml;
using Microsoft.Web.Services3.Security.Tokens;

public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
  string namespaces = "xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" " +
        "xmlns:wsa=\"http://schemas.xmlsoap.org/ws/2004/03/addressing\" " +
        "xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" " +
        "xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\"";

  // build the wsu:Timestamp element with a 5-minute expiry window
  string timestamp = String.Format("<wsu:Timestamp wsu:Id=\"Timestamp-{0}\">"
    + "<wsu:Created>{1}</wsu:Created><wsu:Expires>{2}</wsu:Expires></wsu:Timestamp>",
    Guid.NewGuid(), DateTime.UtcNow.ToString("s") + "Z", DateTime.UtcNow.AddMinutes(5).ToString("s") + "Z");

  // let WSE generate the UsernameToken element
  // http://blog.benpowell.co.uk/2010/11/supporting-ws-i-basic-profile-password.html
  var token = new UsernameToken(this.Username, this.Password, PasswordOption.SendPlainText);

  string usernameToken = token.GetXml(new XmlDocument()).OuterXml;
  // remove namespace definitions since we'll define them in the <headers> node below
  usernameToken = Regex.Replace(usernameToken, " xmlns:[^=]+=\"[^\"]+\"", "");

  string securityXml = "<wsse:Security s:mustUnderstand=\"1\">" + timestamp + usernameToken + "</wsse:Security>";

  string header = string.Empty;
  // add static header fields from configuration
  header += this.AdditionalHeaderElementText;
  header += "<wsa:MessageID>uuid:" + inmsg.MessageID + "</wsa:MessageID>";
  header += securityXml;
  header = String.Format("<headers {0}>{1}</headers>", namespaces, header);

  // Write the OutboundCustomHeaders property to the message context (distinguish it)
  // so the WCF adapter will add these values to the SOAP header
  // http://blogical.se/blogs/mikael_sand/archive/2012/07/06/setting-custom-soap-headers-in-the-wcf-adapter.aspx
  inmsg.Context.Write("OutboundCustomHeaders", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", header);

  return inmsg;
}
  • Note: I first tried promoting the OutboundCustomHeaders property, but I ran into the “The property ‘propertyname’ has a value with length greater than 256 characters” error. Distinguishing the field instead of promoting it resolved the error.


It would be ideal to upgrade WSE web services to WCF to improve performance. If, however, you are stuck calling a WSE web service and you want to use a BizTalk WCF send port, I hope this post helps. Please respond with any questions or comments.

NServiceBus & Second Level Retries

You may be familiar with how NServiceBus immediately retries transactions that fail, and the number of retries is configurable. This is handy in situations such as when a deadlock is hit – the retry will likely succeed:


You are also probably aware that sometimes it takes a few moments for something else to happen in the system before our transaction will succeed. NSB now has a “second-level retry” feature that kicks in when immediate retries are used up: NSB waits a configurable interval (10 seconds in the example below), then retries the transaction with a fresh set of immediate retries. This is incredibly useful in situations where multiple messages arrive with dependencies on each other (though a saga would be better there), or in situations where a dependent system is known to go down for a few minutes or hours. This feature can keep error-queue counts down and reduce support burden:
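For reference, here is a sketch of how second-level retries looked in app.config for NServiceBus 3.x/4.x; the 10-second interval and three retry rounds are example values, not recommendations:

```xml
<configuration>
  <configSections>
    <section name="SecondLevelRetriesConfig"
             type="NServiceBus.Config.SecondLevelRetriesConfig, NServiceBus.Core" />
  </configSections>

  <!-- Wait 10 seconds before the first second-level retry round; each further
       round waits an additional TimeIncrease, up to NumberOfRetries rounds. -->
  <SecondLevelRetriesConfig Enabled="true"
                            TimeIncrease="00:00:10"
                            NumberOfRetries="3" />
</configuration>
```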



ReturnToSourceQueue.exe is a command-line tool that ships with NServiceBus and returns one message, or all messages, from an error queue to their source queues.

Unfortunately the directions are sparse, and my first attempt was a flop:
Please enter the error queue you would like to use:
Please enter the id of the message you'd like to return to its source queue, or 'all' to do so for all messages in the queue.

Unhandled Exception: System.Messaging.MessageQueueException: Format name is invalid.
at System.Messaging.MessageQueue.MQCacheableInfo.get_Transactional()
at System.Messaging.MessageQueue.get_Transactional()
at NServiceBus.Tools.Management.Errors.ReturnToSourceQueue.ErrorManager.set_InputQueue(Address value)
at ReturnToSourceQueue.Program.Main(String[] args)

Here is the correct syntax:

Please enter the error queue you would like to use: myNsbApp.error@servername
Please enter the id of the message you'd like to return to its source queue, or 'all' to do so for all messages in the queue: all

Attempting to return message to source queue. Queue: [myNsbApp.error@servername], message id: [all]. Please stand by.

Udi, please update the directions!

Are you publishing IMyObjectUpdated messages?

If you find yourself publishing a message called IMyObjectUpdated, you are likely violating service boundaries.

Why? Because publishing a message every time your object changes is a sure sign that your other services are saving representations of your object. In other words, you have duplicated data between services, and to stay in sync, you have to publish DTO-style messages every time your object changes. Data duplication indicates that your services are not fulfilling their role of being fully responsible for the business capability they implement.

Instead of publishing an IEmployeeUpdated message containing every field on the Employee entity, you should be publishing events like IEmployeeCreated, IEmployeeFired, and IEmployeePromoted. Each of these event messages should only contain the employee id and one or two other fields.

Pay close attention to how much data is contained in messages that get shared between services. If your services are publishing messages with more data than IDs and dates, then it is high time to re-evaluate your service design.
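As an illustration, a fine-grained event contract might look like the following sketch; IEvent is NServiceBus’s marker interface for published events, and the property names here are invented for the example:

```csharp
using System;
using NServiceBus;

// Hypothetical fine-grained event: the employee id plus only the
// one or two fields relevant to this business occurrence,
// instead of a full Employee DTO.
public interface IEmployeePromoted : IEvent
{
    Guid EmployeeId { get; set; }
    string NewTitle { get; set; }
    DateTime EffectiveDate { get; set; }
}
```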

Distributed-Design Yahoo Group

Here is a link to the Yahoo group for alumni of Udi’s course. Peruse this group for practical SOA discussion material:


SOA – What is a Service?

Web services often come to mind when we hear Service-Oriented Architecture: web services implementing interfaces connecting a web tier, an application tier, and a data tier, or some variation. In the SOA world, however, the concept of a service is an entirely different animal.

An SOA “system” is composed of multiple autonomous “services” that communicate asynchronously. Each service has UI logic, business logic, and some means of storing data. Yes, you heard that right – services do not share a common data store; they are each responsible for their own data. Udi Dahan defines a service as “the technical authority for a specific business capability”, and specifies that “all data and business rules reside within the service” (Advanced Distributed Systems Design course slides).

Asynchronous communication in SOA consists of services publishing events to which other services subscribe. If it appears that service A needs to synchronously access service B’s data as part of its business logic, then the service boundaries should be re-evaluated. It is probable that A and B are either managing the same business capability and should be combined, or that we technologists have synchronously chained steps of a business process into a transaction, and we need to split up the steps and re-evaluate the role of time in the business process.

Attributes of a service

  • Business-centric
  • Technical authority for a business capability
  • Stores all data needed for the business capability it owns
  • Communicates asynchronously with other services
  • Contains UI logic, business logic, and a datastore

Examples of services

System: Online book-sales website
Possible service breakdown:

  • Sales
  • Marketing
  • Customer Care

System: Hotel reservation system
Possible service breakdown:

  • Billing
  • Guest Services
  • Marketing
  • IT/Infrastructure


Disclaimer: I do not claim to be an expert in the subject of SOA, so these are just thoughts based on my experiences with building my first SOA system over the past year plus some theory assimilated from attending Udi Dahan’s Advanced Distributed Systems Design course.

More to come if I have time!