Wednesday, 21 October 2009

Clarifications

Perhaps it's easier to answer the questions posed via email (some bemoaning the lack of Google Wave) in another blog entry. I'm going to collate the questions and answer them one by one.

Q1. What is a QA? / What is the purpose of this experiment? / What is being measured, and against what?

A1. QA refers to the quality attributes required/achieved by applications. The entire purpose of my thesis is to show that applications (in specific circumstances) can achieve the level of QA that they desire, without having control over the individual components that make up the application. Essentially, we're modelling an application as a composition of individual components, which can be bought and sold. Since each component is created by a third party, the total QA achieved by the application depends on the individual QAs exhibited by the components and the interactions between them. In this particular experiment, we make the simplifying assumption that interactions between components do not affect the total QA of the application. Hence, a summation of individual QAs is enough to characterize the total QA of the application.
Measuring QA, in the general case, is difficult. Some QAs, like security, usability and modifiability, are subjective, while others, like performance, are easier to quantify and compare. Again, we make the simplifying assumption that there exists some mechanism to rank an individual component on a particular QA. Whether this is honest disclosure or the auction house's benchmark test is not relevant. What is relevant is that the numerical value assigned to a particular QA of one seller's component is comparable to that of another seller's component for the same QA.

Q2. What're components A, B, C, D, etc.? What does it mean to have relative weights amongst QAs for these components?

A2. Each component implements a specific functionality and is differentiable only on the basis of QA. Hence, all sellers of component type A are perfectly substitutable.
An application could have a set of preferences amongst QAs (performance = 0.7, reliability = 0.35, security = 0.5, etc.). Note that a score on a particular QA is not comparable to a score on another QA; i.e., the application is not ranking performance as doubly important as reliability.
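
To make this concrete, here's a minimal Python sketch (function and variable names are mine, not anything that exists in the experiment) of the additive assumption from A1 together with an application's weight vector from A2. Component QAs are summed per attribute; how the weights are then applied to those sums is deliberately left open here, since scores on different QAs are not directly comparable.

def aggregate_qa(component_qas):
    """Per-attribute sum of component QA scores; interaction effects are ignored (A1's assumption)."""
    total = {}
    for qa in component_qas:  # one dict of QA scores per component
        for attr, value in qa.items():
            total[attr] = total.get(attr, 0) + value
    return total

app_weights = {"performance": 0.7, "reliability": 0.35, "security": 0.5}  # A2's preference vector

components = [
    {"performance": 8, "reliability": 6, "security": 3},  # a seller of component type A
    {"performance": 5, "reliability": 9, "security": 7},  # a seller of component type B
]

print(aggregate_qa(components))  # {'performance': 13, 'reliability': 15, 'security': 10}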

Q3. Can we have some standard DA terminology, please?
A3. (a) Trade price is selected using k-pricing with k = 0.5 (a one-line sketch of this rule appears after this answer)
(b) We're operating a Clearing House instead of a continuous DA (though I'm unclear as to how this differs if clearing happens in every iteration)
(c) Trading strategy is as close to ZIP as possible, but with modifications for expressing QA preferences
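
For reference, one common formulation of the k-pricing rule from (a), as a couple of lines of Python (names are mine):

def k_price(bid_price, ask_price, k=0.5):
    # Trade price sits a fraction k of the way from the ask towards the bid;
    # with k = 0.5 it is simply the midpoint.
    return ask_price + k * (bid_price - ask_price)

print(k_price(10.0, 6.0))  # 8.0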

Q4. What do you mean by 'fitness function'? Or 'profitability'? Is it the same as 'profit'?
A4. Yes. I used the phrase 'fitness function' to drive the analogy between market-based computation and evolutionary improvement. Every iteration sees the removal/modification of some individuals' bids and the 'copying' of some other individuals' bids. I just wanted to frame these as survival in a market. Yes, profitability and profit are used interchangeably.

Q5. What's the distribution of buyers' weights amongst the QAs? Sellers'?
A5. In the absence of any intuition about any particular skewness being more 'normal', a normal distribution is what I was going with.

Q6. Why would a buyer be honest and bid with its limit price? Is there a market mechanism in place that makes it advantageous?
A6. My initial response is that I'm playing fast and loose with terminology: I mean shout_price instead of limit price. How is this shout_price decided? I haven't thought this through. I haven't even thought through whether being honest is something I want to build into my market mechanism.

Q7. Are QA attribute-values estimates of some probability describing whether some event occurs or not?
A7. No, they're simple values, for now. When I model reputation-driven markets, I could probably use probability-distributions ;-)

Wednesday, 14 October 2009

Experiment description

Hypothesis: Using profitability as a fitness function will allow a majority of the population to reach their intended level of QA without any centralized control

Starting condition:
  1. A population of applications that have differing levels of QA they'd like to achieve.
  2. Each application has multiple sub-components that do not know the higher/parent component's QA level (to model global/higher-level info that is not percolated down to lower levels)
  3. Markets exist for the sale of following types of components: A, B, C, D, E, F, G, H
  4. Each application has/needs a subset of the components available
The basic flow is as follows:

  • Application gets a relative weight amongst QA attributes
For each iteration:
  • Application gets a budget to spend
  • Application distributes the budget amongst each of its components' trading agents
  • From the buyer's perspective:
  1. Each agent creates a bid consisting of: limit price, QA attribute-levels. Each QA attribute is modelled as a value on a Likert scale (0 - 10)
  2. The Expected Value of the agent is the summation of the QA attribute-values in the bid
  3. The Actual Value of the agent is the summation of the QA attribute-values after the trade
  4. If no trade happens (due to a very low limit price or too high a required QA value), the actual value is zero (hence, an incentive to trade)
  • From the seller's perspective:
  1. Each agent trades in one market (Agents that trade in multiple markets are not considered yet)
  2. Each agent creates an 'Ask' consisting of: asking price, QA attribute-levels. Modelling of values is same as above.
  3. Each agent has a privately known cost function. Its asking price will never be lower than the cost given by that function.
  • From the market's perspective:
  1. A set of buyers and sellers submit 'bids' and 'asks'.
  2. All those that can be matched are considered 'trades'
  3. The price at which a trade is made is called 'trade_price'. For the closest-matching QA, trade_price = (limit_price + ask_price) / 2
  4. Market clears after making all possible 'trades'
  • Calculation of profits is as follows (a rough code sketch of this bookkeeping follows the list):
  1. Profit = Real_Value + (normalized) savings
  2. Real_Value = Actual_Value - Expected_Value
  3. savings = budget - trade_price
  4. Savings and Value can be negative, hence profits can be negative.
  • How profits constitute feedback:
  1. Sub-components with an Actual Value closer to the parent's Expected Value are given a proportionally higher share of the next budget
  2. Sub-components with positive savings are given a proportionally higher share of next budget
  3. Once sub-components get their budget for the next iteration, they can deduce whether the higher share came from their Actual_Value being closer to the desired outcome, from their savings, or from both.
  4. They adjust their next bid appropriately
Rinse, repeat for 'x' iterations.
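
Below is a rough Python sketch of the per-iteration bookkeeping described above (the structure and names are mine, not the actual simulation code; in particular, the closeness measure and the normalisation of savings are guesses, since the post only says 'proportionally higher share'):

def profit(expected_value, actual_value, budget, trade_price):
    real_value = actual_value - expected_value      # Real_Value = Actual_Value - Expected_Value
    savings = budget - trade_price                  # savings = budget - trade_price
    return real_value + savings / max(budget, 1.0)  # profit = Real_Value + (normalized) savings

def next_budgets(agents, total_budget):
    """agents: list of dicts with 'expected', 'actual' and 'savings' keys (hypothetical shape)."""
    scores = []
    for a in agents:
        closeness = 1.0 / (1.0 + abs(a["actual"] - a["expected"]))  # closer to parent's expectation -> higher score
        scores.append(closeness + max(a["savings"], 0.0))           # positive savings are also rewarded
    total = sum(scores) or 1.0
    return [total_budget * s / total for s in scores]                # proportional split of the next budget

agents = [
    {"expected": 12, "actual": 11, "savings": 2.0},
    {"expected": 12, "actual": 6,  "savings": -1.0},
]
print(next_budgets(agents, 100.0))  # the closer, thriftier sub-component gets the larger share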

Wednesday, 29 July 2009

Completing the feedback loop

Assuming that we have a multi-attribute CDA in place to allow for buyers and sellers to interact, what causes the buyer to come back after one transaction?

a) When the utility function that monitors the value from the purchase becomes negative, the buyer needs to come back. That is, if an app bought an image-manipulation web-service with a stated performance rating of 10AMM (Above Market Minimum) for £x + £y (market entry cost), it expects a value of at least £(x + y) to be delivered. Upon composition and monitoring, if the value that the app receives is less than £(x + y), then the app will consider returning to the market. There are then two possible outcomes: the app goes back and bids for more than 10AMM, or the app tries to buy 10AMM for less than £x.
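
As a tiny sketch, the return-to-market check above is just a comparison (all names are mine; price_paid is £x, entry_cost is £y, delivered_value is what monitoring reports):

def should_return_to_market(delivered_value, price_paid, entry_cost):
    # Utility turns negative once the delivered value falls below what was paid in total.
    return delivered_value < price_paid + entry_cost

if should_return_to_market(delivered_value=7.0, price_paid=6.0, entry_cost=2.0):
    # Two open options from the post: bid for more than 10AMM next time,
    # or try to buy 10AMM again for less than £x.
    pass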

How do we figure out what's optimal for the app?

Kinds of quality attributes

There are at least two types of quality attributes in an architecture:

1) Those that can be 'designed in': these are effected at design time. Modularity and testability are examples of this.

2) Operational qualities: these can be measured only at runtime. At design time, one can only hypothesize or simulate their values. Performance and availability are examples of this.

In my work, I'm going to look at operational QAs. Design-time QAs are best manipulated at design time, but operational QAs can only be guesstimated before deployment. Therefore, the maximum benefit is obtained when operational QAs are adapted based on actual, in-the-field conditions.

Friday, 24 July 2009

While creating bids, the essential attributes are (a possible encoding is sketched after the list):

1) Time for which service will be provided
2) Renewal, whether automatic or not
3) functionality - how do we denote this for automatic recognition?
4) QA - direct - performance on a known data set / per sec
5) QA - indirect - number of locations occupied by web-svc instances (reliability, for instance)
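
One possible shape for a bid carrying these five attributes, as a Python sketch (field names are mine; the notation for functionality and for QAs is exactly the open question above):

from dataclasses import dataclass, field

@dataclass
class Bid:
    duration_days: int       # 1) time for which the service will be provided
    auto_renew: bool         # 2) renewal, automatic or not
    functionality: str       # 3) some machine-recognisable label, TBD
    qa_direct: dict = field(default_factory=dict)    # 4) e.g. {"throughput_per_sec": 120}
    qa_indirect: dict = field(default_factory=dict)  # 5) e.g. {"instance_locations": 4}

example = Bid(duration_days=30, auto_renew=False, functionality="image-manipulation",
              qa_direct={"throughput_per_sec": 120}, qa_indirect={"instance_locations": 4})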

Tuesday, 14 July 2009

I've decided on the broad thrust of my research. I'm looking at self-organisation of web-applications on the cloud, to ensure that quality attribute targets are met.

That is, I shall propose a method for architects to create web-applications, which connect to various web-services, based not only on functionality offered but also quality attribute targets.

What does this mean? I envision a marketplace where web-services bid for and sell functionality. However, connecting to a component (inside a web-service) is not an easy decision to make. Not all components are created equal. Some components exhibit high performance and scalability, while others exhibit dependability. Still others might offer a secure layer. Depending on which Quality Attribute the web-application is looking for, it will connect to the web-service that offers that particular one.

Connecting to and using a web-service cannot be gratis. This is where the marketplace comes in. If we imagine a set of WS buyers and sellers, then the following conditions are necessary for an efficient marketplace to exist:

1) There should be an efficient matching of buyers and sellers
a) Ex-post individual rationality should hold
b) Matching should happen quickly - in low-order polynomial time

2) Matching of buyers and sellers must be done on multiple attributes
3) The market should have some incentive to be present, i.e., both the buyer and the seller will have to pay to participate in the market.
a) The buyer will have to enter the market with a frequency that is directly proportional to the frequency with which its focus (or chosen QA) changes. So, if the application changes its preferred QA frequently, then the cost of entering the market will have to be factored into the budget that it is willing to spend on acquiring a web-service. On the other hand, if the frequency of change is low, then the cost of entering the market can be seen as a one-time fixed cost.
b) Allocative efficiency should be high. The total utility of the matching should be high enough, for buyers and sellers to continue to stay in the marketplace. Else, they would leave the market and try to deal outside.

A multi-attribute auction allows for (2), while a continuous double auction exhibits (3b). I wonder if a combination of the two would be suitable for my purpose.

What're the modelling aspects to take care of?

1) Web-apps would have to be strictly monotonic in their preferences for QAs
2) A notation for indicating QAs would need to be evolved.
3) A notation for indicating functionality would need to be evolved.

Thursday, 25 June 2009

I've got a position paper accepted at WICSA '09 (http://www.wicsa.net/) and am currently busy preparing the camera-ready copy. Apart from addressing the issues raised by the reviewers, my current issues to address are:

1. What exactly am I proposing? Is it a method to architect applications in the cloud? Is it a method to architect the cloud? Is it a method to evaluate tradeoffs between cost and dependability of applications?

2. Why is self-optimization important? What role does the market play (since I'm using economics as a part-driver)?

2.1 What's the deal with multi-attribute auctions? If they're so good, why don't people use them?

3. What kind of a motivating example or case study can I use? How do I prove (really prove, using mathematics) my thesis?

4. What's the evaluation mechanism?

Thursday, 12 February 2009

Things to clarify:

What's the difference between cloud and grid?

Where would the results of self-optimization research be used? Pre-deployment? Post-deployment?

Do the publish/subscribe paradigm and Service-Oriented Architecture fit in with the cloud? How are they related?

Thursday, 8 January 2009

mechanism to express adaptation

How does one express adaptation in software architecture? What's the DNA? Do we use the component-connector-topology permutations? Or is there another mechanism possible?

Would add-C/C/T and delete-C/C/T be enough?

The more I think about it, the natural way of adaptation, where multiple organisms evolve in different ways and the fittest survive, doesn't seem feasible. No practical application will have the luxury of trying out multiple solutions till the best one emerges. Each particular adaptation that the application makes will have to be incrementally better or rolled back immediately.

What does this mean for the monitoring mechanism? Will it necessarily have to be like the one described in [1]? Without layers, would it be impossible to have something sensible and workable?


[1] @inproceedings{citeulike:1840134,
address = {Washington, DC, USA},
author = {Kramer, Jeff and Magee, Jeff },
booktitle = {FOSE '07: 2007 Future of Software Engineering},
doi = {10.1109/FOSE.2007.19},
isbn = {0769528295},
keywords = {architecture, self-organising},
pages = {259--268},
publisher = {IEEE Computer Society},
title = {Self-Managed Systems: an Architectural Challenge},
url = {http://dx.doi.org/10.1109/FOSE.2007.19},
year = {2007}
}

Wednesday, 7 January 2009

starting from scratch -- almost

The Christmas hols (and the associated lack of work done) have put paid to any continuity of thought. The only thing I've done is track down three PhD theses that are pretty close to my idea of what I'm doing.

The first one is from CMU (I might even have had coffee with this guy once):
Rainbow: Cost-effective software architecture based adaptation by Shang-Wen Cheng, Ph.D., Carnegie Mellon University, 2008

Supporting architecture- and policy-based self-adaptive software systems
by John C. Georgas, Ph.D., University of California, Irvine, 2008

Service clouds: Overlay-based infrastructure for autonomic communication services
by Farshad Alam-Samimi, Ph.D., Michigan State University, 2007

I've gotten in touch with Cheng and John Georgas, but I'm unable to get in touch with Farshad to get a copy of his PhD thesis.

Of course, getting a copy of the thesis isn't the same as reading, absorbing and understanding it. But it's a start.