Tuesday, August 19, 2014

Some terms and definitions from Patterns of Enterprise Application Architecture, by Martin Fowler

Words matter. Especially when discussing architecture with business analysts and stakeholders. As Martin Fowler astutely points out, when it comes to software/system performance, several terms are used inconsistently.

Response time: From the user’s perspective, the amount of time it takes the system to process a request. The request may be a UI action or an API call.

Responsiveness: The amount of time the system takes to acknowledge a request. Let’s stop here and understand the difference between response time and responsiveness. Last year I was involved in a fairly large project that had a complicated reporting process using very large data sets and several rules. This example jumps out at me because the difference between the response time and the responsiveness of the system was significant. We adopted a ‘fire and forget’ calling scheme from the client end: the server would push status updates to the client at regular intervals. In this architecture the responsiveness of the system was rather good; the server was able to respond quickly and let the client know of its status. The total response time, however, was significant. A UI example would be a progress bar during a file copy: it improves the responsiveness of the system but not the response time.
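
The fire-and-forget scheme above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the project’s actual code; names like startReport, serverTick, and getStatus are made up for the example. The point is that the acknowledgement (responsiveness) happens immediately, while the full result (response time) takes many more ticks.

```javascript
// Hypothetical sketch of fire-and-forget: the server acknowledges at once
// (good responsiveness) while the full report takes much longer to finish
// (long response time). Status is pushed/polled at regular intervals.
const jobs = {};
let nextId = 1;

function startReport() {                 // client fires the request and forgets
  const id = nextId++;
  jobs[id] = { status: 'accepted', progress: 0 };  // acknowledged immediately
  return id;                             // responsiveness: measured to this point
}

function serverTick(id) {                // server makes progress between updates
  const job = jobs[id];
  job.progress = Math.min(100, job.progress + 25);
  job.status = job.progress === 100 ? 'complete' : 'running';
}

function getStatus(id) { return jobs[id]; }

const id = startReport();                // acknowledged right away
serverTick(id);
serverTick(id);
console.log(getStatus(id));              // { status: 'running', progress: 50 }
```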

Latency: The minimum time required to get any response, even if the work to be done is nonexistent. This becomes an issue when clients and servers are on physically separate machines. When everything is on the same machine and the code is running correctly, latency should be insignificant. A recommendation for systems running on separate machines: minimize remote calls.
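
A back-of-envelope sketch of why minimizing remote calls matters: every round trip carries a fixed latency cost no matter how little work it does. The 50 ms figure is an assumption for illustration only.

```javascript
// Fixed latency per round trip dominates chatty designs.
// 50 ms per round trip is an assumed figure for illustration.
const LATENCY_MS = 50;
const ITEMS = 20;

const chatty = ITEMS * LATENCY_MS;   // one remote call per item
const batched = 1 * LATENCY_MS;      // one remote call carrying all items

console.log(chatty, batched);        // 1000 50
```

Even if the server does zero work per item, the chatty design pays twenty round trips; batching pays one.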

Throughput: How much work a system can do in a given amount of time, measured in units such as bytes per second or transactions per second. The unit of measurement is contextual; the important thing is to talk in terms of measurements when discussing throughput.

Load: A measure of the stress the system is under. Load is usually discussed in the context of other measurements, such as response time. In fact, for large systems a plot of load against response time can give an interesting visualization of this aspect of performance. Such graphs often expose trends that might not be obvious otherwise.

Efficiency: Performance divided by resources. I have always had qualms about using the term efficiency when talking about the performance of a system, mainly because it seemed very abstract. An example from Patterns of Enterprise Application Architecture: a system that gets 30 tps on two CPUs is more efficient than a system that gets 40 tps on four identical CPUs. A fitting example; however, on more than one occasion customers have used the terms efficiency, response time, and responsiveness interchangeably, and things can get confusing. When talking architecture with customers I feel safe sticking to the response time and responsiveness of a system.
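
Fowler’s example reduces to simple arithmetic, which makes the definition less abstract:

```javascript
// Efficiency = throughput / resources (here, transactions/sec per CPU).
const efficiency = (tps, cpus) => tps / cpus;

console.log(efficiency(30, 2));  // 15 tps per CPU
console.log(efficiency(40, 4));  // 10 tps per CPU
```

The two-CPU system delivers less raw throughput but more throughput per CPU, hence it is the more efficient of the two.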

Capacity: Maximum effective throughput or load. This might be the point beyond which performance dips below an acceptable level.

Scalability: A system is scalable when adding more hardware increases performance (throughput). Vertical scalability, or scaling up, means adding more resources to a single server, such as memory. Horizontal scalability, or scaling out, means adding more servers.

“When building enterprise software systems, it often makes sense to build for hardware scalability rather than capacity or even efficiency. Scalability gives you the option of better performance if you need it. Scalability can also be easier to do. Often designers do complicated things that improve the capacity on a particular hardware platform when it might actually be cheaper to buy more hardware”
- Martin Fowler, Patterns of Enterprise Application Architecture

These terms come up frequently when talking about performance in software architecture. Notes from a reliable source like Patterns of Enterprise Application Architecture are most useful and will act as a personal reference. Conversations about performance are always tricky, and performance optimization is trickier. Having some clarity about the terms used to describe what requires tuning and what is lacking will at least get the ball rolling.

Thursday, June 26, 2014

Earthy Distractions: The Human Touch

In a sense, what we do at IDV is thematic mapping. Here, the goal is to create a thematic globe using population density data. The density data is obtained from the Socioeconomic Data and Applications Center (SEDAC). I would like to share some of the challenges encountered in the process –

A density cube is placed at every coordinate available in the dataset. That is a lot of points! The height of a cube is determined by the density value at that point. With the density distribution being rather skewed, I needed a reasonable partitioning mechanism, and quantile distribution was the answer. For more information on distributions, check out Telling Truth. I have 9 buckets of different colors, and the density data is almost equally distributed among the 9 buckets.
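
A minimal sketch of quantile bucketing, assuming the actual implementation differs: sort the density values and cut at equal-count boundaries so each of the 9 buckets holds roughly the same number of points. quantileBreaks and bucketOf are illustrative helpers, not the app’s real code.

```javascript
// Quantile partitioning: equal-count buckets rather than equal-width ranges,
// which handles a skewed density distribution gracefully.
function quantileBreaks(values, buckets) {
  const sorted = [...values].sort((a, b) => a - b);
  const breaks = [];
  for (let i = 1; i < buckets; i++) {
    breaks.push(sorted[Math.floor(i * sorted.length / buckets)]);
  }
  return breaks;                  // buckets - 1 cut points
}

function bucketOf(value, breaks) {
  let i = 0;
  while (i < breaks.length && value >= breaks[i]) i++;
  return i;                       // index 0 .. buckets - 1, maps to a color
}
```

With 9 buckets, bucketOf returns an index 0–8 that can be mapped straight to the 9 cube colors.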

The JavaScript library used to create this globe is three.js. One frustrating fact about this open source library is that when the latest version is released, there is very little documentation on how to migrate an app running an old version to the new one. I initially developed this app on a rather old version of three.js; when I tried migrating to the latest version, ver67, at times my face turned black, blue, and red in frustration. Sigh! I like three.js, it is fun, but there are some painful moments.
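
Placing a cube at a coordinate means converting its latitude/longitude to a point on the sphere. A sketch of that conversion, pure math and independent of which three.js version is in play; the actual mesh creation (geometry, material, scene) is omitted, and latLonToXYZ is an illustrative name.

```javascript
// Convert a latitude/longitude pair (degrees) to a 3D point on a sphere
// of the given radius, where a density cube can then be positioned.
function latLonToXYZ(latDeg, lonDeg, radius) {
  const lat = latDeg * Math.PI / 180;
  const lon = lonDeg * Math.PI / 180;
  return {
    x: radius * Math.cos(lat) * Math.cos(lon),
    y: radius * Math.sin(lat),           // y is "up" toward the north pole
    z: radius * Math.cos(lat) * Math.sin(lon),
  };
}

console.log(latLonToXYZ(0, 0, 1));       // { x: 1, y: 0, z: 0 }
```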

It takes a little time to load all the cubes on the globe. I probably could have done a better job with performance, but this is a quick and dirty experiment done over lunch breaks, so please understand if I put off improvements for now. :)


Well folks, this is the second of the Earthy Distractions series; hope you like what you see. The globe is transparent; I thought it gives an interesting visual experience. Please be patient, the visualization is a bit sluggish while loading.