Perhaps it's easier to answer the questions posed (via email [some bemoaning the lack of Google Wave]) in another blog entry. I'm going to collate the questions and answer them one by one.
Q1. What is a QA? / What is the purpose of this experiment? / What is being measured, and against what?
A1. QA refers to the quality attributes required/achieved by applications. The entire purpose of my thesis is to show that applications (in specific circumstances) can achieve the level of QA that they desire, without having control over the individual components that make up the application. Essentially, we're modelling an application as a composition of individual components, which can be bought and sold. Since each component is created by a third party, the total QA achieved by the application depends on the individual QAs exhibited by the components and the interactions between them. In this particular experiment, we make the simplifying assumption that interactions between components do not affect the total QA of the application. Hence, a summation of individual QAs is enough to characterize the total QA of the application.
Measuring QA, in the general case, is difficult. Some QAs, like security, usability, and modifiability, are subjective, while others, like performance, are easier to quantify and compare. Again, we make the simplifying assumption that there exists some mechanism to rank an individual component on a particular QA. Whether this is done by honest disclosure or by the auction house's benchmark tests is not relevant. What is relevant is that the numerical value assigned to a particular QA of one seller's component is comparable to the value assigned to another seller's component for that same QA.
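To make those two assumptions concrete, here's a minimal sketch (illustrative Python with made-up scores, not the thesis code): scores for the same QA are comparable across sellers, and the application's total QA is just the per-attribute sum over its chosen components.

```python
# Illustrative only: made-up QA scores for sellers of two component types.
sellers_of_a = {"seller_a1": {"performance": 0.8, "reliability": 0.6},
                "seller_a2": {"performance": 0.5, "reliability": 0.9}}
sellers_of_b = {"seller_b1": {"performance": 0.7, "reliability": 0.7}}

def best_on(sellers, qa):
    """Scores for the same QA are comparable across sellers, so they can be ranked."""
    return max(sellers, key=lambda name: sellers[name][qa])

def total_qa(chosen_components):
    """Additive assumption: interactions are ignored, so the application's QA
    is the per-attribute sum over its chosen components."""
    totals = {}
    for scores in chosen_components:
        for qa, value in scores.items():
            totals[qa] = totals.get(qa, 0.0) + value
    return totals

best_a = best_on(sellers_of_a, "performance")
app_qa = total_qa([sellers_of_a[best_a], sellers_of_b["seller_b1"]])
print(best_a, app_qa)   # seller_a1 wins on performance; totals ~1.5 and ~1.3
```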
Q2. What are components A, B, C, D, etc.? What does it mean to have relative weights amongst QAs for these components?
A2. Each component implements a specific functionality and is differentiable only on the basis of its QAs. Hence, all sellers of component type A are perfectly substitutable.
An application could have a set of preferences amongst QAs (performance = 0.7, reliability = 0.35, security = 0.5, etc.). Note that a weight on one QA is not comparable to a weight on another QA; i.e., the application is not ranking performance as doubly important as reliability.
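As a sketch of what I mean (illustrative Python, made-up numbers): the weights scale scores within each QA, but there's deliberately no collapsing into a single number, because scores across QAs aren't comparable.

```python
# Illustrative only: preference weights scale scores within each QA; they are
# not meant to make one QA comparable to another, so no cross-QA total is taken.
preferences = {"performance": 0.7, "reliability": 0.35, "security": 0.5}

def weighted(component_scores, prefs):
    """Apply the application's weight to each QA score, keeping QAs separate."""
    return {qa: prefs.get(qa, 0.0) * score
            for qa, score in component_scores.items()}

print(weighted({"performance": 0.8, "reliability": 0.6, "security": 0.4},
               preferences))
# roughly: performance 0.56, reliability 0.21, security 0.20
```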
Q3. Can we have some standard double-auction (DA) terminology, please?
A3. (a) The trade price is selected using k-pricing with k = 0.5 (see the sketch after this list)
(b) We're operating a Clearing House instead of a continuous DA (though I'm unclear how this differs when clearing happens in every iteration)
(c) The trading strategy is as close to ZIP as possible, with modifications for expressing QA preferences
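To illustrate (a) and (b) together, here's my own rough sketch in Python (not the actual simulation code) of one clearing-house round where each matched pair trades at a k-priced value; with k = 0.5 that's just the midpoint of bid and ask.

```python
# My own sketch, not the simulation code: one clearing-house round with k-pricing.
def clear_market(bids, asks, k=0.5):
    """Match the highest bids with the lowest asks; each matched pair trades
    at a price k of the way from the ask to the bid (k = 0.5 -> midpoint)."""
    bids = sorted(bids, reverse=True)   # buyers' shout prices, best first
    asks = sorted(asks)                 # sellers' shout prices, best first
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:                   # no further profitable matches
            break
        trades.append(ask + k * (bid - ask))
    return trades

print(clear_market(bids=[10.0, 9.0, 7.0], asks=[8.0, 8.5, 9.5]))  # [9.0, 8.75]
```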
Q4. What do you mean by 'fitness function'? Or 'profitability'? Is it the same as 'profit'?
A4. Yes. I used the phrase 'fitness function' to drive the analogy between market-based computation and evolutionary improvement. Every iteration sees the removal/modification of some individuals' bids and the 'copying' of some other individuals' bids; I wanted to view this as survival in a market. Yes, profitability and profit are used interchangeably.
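A rough sketch of that analogy (illustrative Python, not the actual mechanism): if profit is read as fitness, the least profitable bid is dropped and replaced with a copy of the most profitable one.

```python
# Illustrative only: profit treated as fitness. The least profitable trader's
# bid is 'removed' and replaced by a copy of the most profitable trader's bid.
def evolve_bids(traders):
    """traders: list of dicts, each holding a 'bid' and the 'profit' it earned."""
    ranked = sorted(traders, key=lambda t: t["profit"], reverse=True)
    ranked[-1]["bid"] = ranked[0]["bid"]   # copy the fittest bid
    return ranked

print(evolve_bids([{"bid": 9.0, "profit": 1.5},
                   {"bid": 6.0, "profit": 0.0}]))
```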
Q5. What's the distribution of buyers' weights amongst the QAs? Sellers'?
A5. In the absence of any intuition about one particular skewness being more 'normal' than another, a normal distribution is what I was going with.
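Something like the following (illustrative Python; the mean and spread are placeholders) is what I have in mind for drawing both buyers' and sellers' weights:

```python
import random

# Illustrative only: each trader's weight per QA is drawn from a normal
# distribution and clipped at zero; the mean and spread here are placeholders.
QAS = ["performance", "reliability", "security"]

def random_weights(mu=0.5, sigma=0.15):
    return {qa: max(0.0, random.gauss(mu, sigma)) for qa in QAS}

buyer_weights = random_weights()
seller_weights = random_weights()
print(buyer_weights, seller_weights)
```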
Q6. Why would a buyer be honest and bid with its limit price? Is there a market mechanism in place that makes it advantageous?
A6. My initial response is that I'm playing fast and loose with terminology: I mean shout_price rather than limit price. How is this shout_price decided? I haven't thought this through. I haven't even thought through whether honesty is something I want to build into my market mechanism.
Q7. Are QA attribute values estimates of some probability describing whether some event occurs or not?
A7. No, they're simple values for now. When I model reputation-driven markets, I could probably use probability distributions ;-)
Wednesday, 21 October 2009