Perhaps it's easier to answer the questions posed via email (some bemoaning the lack of Google Wave) in another blog entry. I'm going to collate the questions and answer them one by one.
Q1. What is a QA? / What is the purpose of this experiment? / What is being measured, and against what?
A1. QA refers to the quality attributes required/achieved by applications. The entire purpose of my thesis is to show that applications (in specific circumstances) can achieve the level of QA they desire without having control over the individual components that make up the application. Essentially, we're modelling an application as a composition of individual components, which can be bought and sold. Since each component is created by a third party, the total QA achieved by the application depends on the individual QAs exhibited by the components and the interactions between them. In this particular experiment, we make the simplifying assumption that interactions between components do not affect the total QA of the application. Hence, a summation of individual QAs is enough to characterize the total QA of the application.
Measuring QA, in the general case, is difficult. Some QAs, like security, usability, and modifiability, are subjective, while others, like performance, are easier to quantify and compare. Again, we make a simplifying assumption that there exists some mechanism to rank an individual component on a particular QA. Whether this is by honest disclosure or by the auction house's benchmark test is not relevant. What is relevant is that the numerical value assigned to a particular QA of one seller's component is comparable to that of another seller's component for the same QA.
Q2. What're components A, B, C, D etc? What does it mean to have relative weights amongst QA for these components?
A2. Each component implements a specific functionality and is only differentiable on the basis of QA. Hence, all sellers of component type A are perfectly substitutable.
An application could have a set of preferences amongst QAs (performance = 0.7, reliability = 0.35, security = 0.5, etc.). Note that a score on a particular QA is not comparable to a score on another QA; i.e., the application is not ranking performance as doubly important as reliability.
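A minimal sketch of what such a preference vector could look like (the attribute names and values are just the ones from the example above):

```python
# An application's QA preferences, one weight per attribute.
# Weights are not comparable across attributes: 0.7 on performance
# does not mean performance is twice as important as 0.35 on reliability.
qa_preferences = {
    "performance": 0.7,
    "reliability": 0.35,
    "security": 0.5,
}
```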
Q3. Can we have some standard DA terminology, please?
A3. (a) Trade price is selected using k-pricing with k = 0.5
(b) We're operating a Clearing House instead of a continuous DA (though I'm unclear as to how this differs if clearing happens in every iteration)
(c) Trading strategy is as close to ZIP as possible, but with modifications for expressing QA preferences
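For reference, the k-pricing rule sets the trade price a fraction k of the way from the matched ask up to the bid, so k = 0.5 is simply the midpoint. A minimal sketch (the function name is mine):

```python
def k_price(bid_price, ask_price, k=0.5):
    """k-pricing: the trade price lies a fraction k of the way from the
    ask up to the bid. With k = 0.5 this is the midpoint,
    (bid_price + ask_price) / 2."""
    return ask_price + k * (bid_price - ask_price)
```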
Q4. What do you mean by 'fitness function'? Or 'profitability'? Is it the same as 'profit'?
A4. Yes. I used the phrase 'fitness function' to drive the analogy between market-based computation and evolutionary improvement. Every iteration sees the removal/modification of some individuals' bids and the 'copying' of some other individuals' bids; I just wanted to view these as survival in a market. Yes, profitability and profit are used interchangeably.
Q5. What's the distribution of buyers' weights amongst the QAs? Sellers'?
A5. In the absence of any intuition about any particular skewness being more 'normal', a normal distribution is what I was going with.
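As a sketch, the sampling could look like the following (the mean, standard deviation, and clamping to [0, 1] are my placeholders, not values from the experiment):

```python
import random

QA_ATTRIBUTES = ["performance", "reliability", "security"]

def sample_qa_weights(mean=0.5, std=0.15):
    """Draw one weight per QA attribute from a normal distribution,
    clamped to [0, 1]. Parameters are illustrative only."""
    return {qa: min(1.0, max(0.0, random.gauss(mean, std)))
            for qa in QA_ATTRIBUTES}
```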
Q6. Why would a buyer be honest and bid with its limit price? Is there a market mechanism in place that makes it advantageous?
A6. My initial response is that I'm playing fast and loose with terminology: I mean shout_price instead of limit price. How is this shout_price decided? I haven't thought this through. I haven't even thought through whether honesty is something I want to build into my market mechanism.
Q7. Are QA attribute-values estimations of some probability describing whether some event occurs or not?
A7. No, they're simple values, for now. When I model reputation-driven markets, I could probably use probability distributions ;-)
Wednesday, 14 October 2009
Experiment description
Hypothesis: Using profitability as a fitness function will allow a majority of the population to reach their intended level of QA without any centralized control
Starting condition:
- A population of applications that have differing levels of QA they'd like to achieve.
- Each application has multiple sub-components that do not know the higher/parent component's QA level (to model global/higher-level information that is not percolated down to lower levels)
- Markets exist for the sale of following types of components: A, B, C, D, E, F, G, H
- Each application has/needs a subset of the components available
- Application gets a relative weight amongst QA attributes
- Application gets a budget to spend
- Application distributes its budget amongst each of its components' trading agents
- From the buyer's perspective:
- Each agent creates a bid consisting of: limit price, QA attribute-levels. Each QA attribute is modelled as a value on a Likert scale (0-10)
- The Expected Value of the agent is the summation of the QA attribute-values in the bid
- The Actual Value of the agent is the summation of the QA attribute-values after the trade
- If no trade happens (due to a very low limit price or too high QA value), actual value is zero (hence, incentive to trade)
- From the seller's perspective:
- Each agent trades in one market (Agents that trade in multiple markets are not considered yet)
- Each agent creates an 'Ask' consisting of: asking price, QA attribute-levels. Modelling of values is same as above.
- Each agent has a privately known cost function. Its asking price will never be lower than the cost given by this function.
- From the market's perspective:
- A set of buyers and sellers submit 'bids' and 'asks'.
- All those that can be matched are considered 'trades'
- The price at which a trade is made is called 'trade_price'. For the closest-matching QA, trade_price = (limit_price + ask_price) / 2
- Market clears after making all possible 'trades'
- Calculation of profits is as follows (see the sketch after this list):
- Real_Value + (normalized) savings
- Real_Value = Actual_Value - Expected_Value
- savings = budget - trade_price
- Savings and Value can be negative, hence profits can be negative.
- How profits constitute feedback:
- Sub-components with an Actual Value closer to the parent's Expected Value are given a proportionally higher share of the next budget
- Sub-components with positive savings are given a proportionally higher share of next budget
- Once sub-components get their budget for the next iteration, they can deduce whether their Actual_Value was closer to the desired outcome, or their savings were positive, or both
- They adjust their next bid appropriately
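To make the buyer-side book-keeping above concrete, here is a minimal sketch of the value and profit calculations under the assumptions stated in this post (the names are mine, and dividing savings by the budget is just one possible normalisation, not necessarily the one used):

```python
def expected_value(bid_qa_levels):
    """Summation of the QA attribute-values (0-10 each) in the bid."""
    return sum(bid_qa_levels.values())

def actual_value(traded_qa_levels):
    """Summation of the QA attribute-values obtained after the trade;
    zero if no trade happened."""
    return sum(traded_qa_levels.values()) if traded_qa_levels else 0

def profit(bid_qa_levels, traded_qa_levels, budget, trade_price):
    """Profit = Real_Value + (normalised) savings, where
    Real_Value = Actual_Value - Expected_Value and
    savings = budget - trade_price. Either term can be negative,
    so profit can be negative too."""
    real_value = actual_value(traded_qa_levels) - expected_value(bid_qa_levels)
    savings = budget - trade_price
    normalised_savings = savings / budget if budget else 0.0
    return real_value + normalised_savings
```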