This might be a dumb question, since my understanding is limited to the article, but here goes.
Let's say we have the situation in the diagram: we have two policy options, "A" and "B", and we're betting on "if we adopt this policy, will the GDP reach the target?". Let's say that currently A is unpopular, so "yes if A" has a low price, and B is popular. Now imagine an oracle enters the market, who knows that A actually has the best chance of reaching the target, but B also has a good shot. In fact, they know that "yes on A" and "yes on B" are both underpriced - just "yes on A" more so. Ideally, we would like this oracle to put their money on "yes on A". But if they don't have enough capital to change which market leads, they'll break even (by having their money returned when B wins) if they do that. Instead, they should bet on B, which they know to be a worse option, because it'll actually fire and they'll still make some profit. The market doesn't get to learn all the information that the oracle has.
Is there some way to structure the markets so that our oracle instead bets on A?
One improvement might be to make the total capital that investors have access to into public information.
That way the market should judge bets based not on their absolute value, but on the degree of risk that the bettor is willing to take on. An oracle betting 100% of their capital on A should be treated as a maximally strong signal regardless of the size of said capital. Now, if even that signal isn't enough to move investors away from B... well, you can't really stop them without turning the system into an aristocracy.
Of course, ensuring that the total capital information is accurate is going to be complex - how do you prevent rich people from creating 'proxy investors' who bet 100% of their borrowed funds? - but it seems on a similar order of magnitude of difficulty as 'prevent people from insider trading'.
Unless the two choices are extremely close, we probably don't want it to be the case that a single investor, even with 100% of their funds, can single-handedly change the outcome. We just want each investor to be incentivized to put their money behind what they believe to be the best policy, and hope that collectively they choose right.
Thus, we should expect that even if our oracle knows with certainty that A is the best policy, and invests all their money into it, they're unlikely to be able to unilaterally change which policy will actually be implemented. They will therefore be incentivized to bet on B succeeding (which is also a winning bet), and now that we're revealing that they bet 100% of their capital on it, all we've done is magnified the signal of that bet. But this is bad - our oracle is betting in a way that moves us away from the policy they know is best.
Most of the prediction markets I've seen use sets of binary options that are complete and mutually exclusive. The entity that runs the market will only sell you a complete set of those binary options, for, say, $1 per set.
So you could not buy "A" on its own; you could buy a pair made of a copy of "A" and a copy of "not A". If the oracle knows that "A" is true with p ~ 1, they would know that "A" is underpriced (it should be ~$1) and "not A" is overpriced (it should be ~$0), so they would buy ~infinitely many pairs of "A" and "not A", selling off the "not A" copies at the inflated price.
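To make the arbitrage concrete, here's a minimal sketch, assuming a market maker that sells complete sets for $1 and made-up secondary-market prices (the $0.40 figure is hypothetical, not from the thread):

```python
def arbitrage_profit(market_price_not_a, true_p_a, set_cost=1.0):
    """Expected profit per set from buying the pair {"A", "not A"} for
    set_cost, selling the "not A" copy at its current market price, and
    holding "A", which pays $1 with probability true_p_a."""
    expected_value_a = true_p_a * 1.0  # the "A" share pays $1 if A is true
    return market_price_not_a + expected_value_a - set_cost

# Oracle knows A is true with p ~ 1, but "not A" still trades at $0.40:
profit = arbitrage_profit(market_price_not_a=0.40, true_p_a=1.0)  # ~$0.40/set
```

With positive expected profit per set and no budget limit, the oracle would keep buying sets until the "not A" price is driven to ~$0.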
I don't think that matches what they're describing in the article. The problem is that we only get to try implementing one of policy A and policy B, so if someone bets "I think policy A will achieve the goal", but we implement policy B, you have to just void their bet.
If we had already decided on policy A, and were just trying to predict whether it'll work, what you describe would be fine. But in the article, we're trying to decide whether to implement policy A or policy B, by having two separate markets, one for "what will happen if we do A" and another for "what will happen if we do B", and one of those two will get voided.
Unless I understood it wrong, you could use a variant of that:
Market 1 has options A, not A. Market 2 has options B, not B. At the end of the trading period, void the "losing" market and reward the winning one. It's trivial to implement if you're using e.g. electronic payments and you forbid "cross" trading between A and B.
Right, that's the system they propose. But I'm saying that can result in an agent being incentivized to put their money into a policy they believe to be worse, as long as they believe that policy is underpriced and more likely to "win", which is undesirable.
So for example, suppose we have the objective "increase our production of paperclips by next year". Our two policy options are "build a paperclip factory", and "build a paper mill". We now have two bettings markets, each with a Yes/No pair of options, "Will building a paperclip factory increase our production of paperclips?" and "Will building a paper mill increase our production of paperclips?".
Now let's say that currently, the paper mill has "Yes, this will work" at 60%, and the factory has "Yes, this will work" at 40%. I'm a paperclip genius, and I know that the true odds are that the factory has a 90% chance of working, and the mill has a 75% chance of working.
Where do I put my money? Ostensibly, we want me to put it on the factory, because that's the best policy. But the factory is unpopular and that policy is unlikely to be implemented (since it's down by 20 percentage points). Even if I nudge it up a bit, my bet is likely to be voided, and I make zero return for my knowledge. Instead, I will bet on "yes the mill will work", because that market is also underpriced, and the policy will actually be implemented. By doing this, I maximize my expected reward, and I also move us away from what I think is the best policy.
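A toy expected-value calculation makes this concrete. The implementation probabilities below (0.9 for the leading mill, 0.1 for the factory) are my own assumption for illustration, and I assume a voided bet simply refunds the purchase price:

```python
def ev_per_share(p_implemented, true_odds, price):
    """Expected profit per "Yes" share costing `price`: it pays $1 on
    success, and is refunded at cost if the market is voided, so only
    the implemented branch contributes any profit or loss."""
    return p_implemented * (true_odds - price)

# Market prices: mill "Yes" at 0.60, factory "Yes" at 0.40.
# The paperclip genius's true odds: factory 0.90, mill 0.75.
# Assumed implementation probabilities: mill 0.9, factory 0.1.
ev_factory = ev_per_share(p_implemented=0.1, true_odds=0.90, price=0.40)  # ~0.05
ev_mill    = ev_per_share(p_implemented=0.9, true_odds=0.75, price=0.60)  # ~0.135

# The factory has more edge (0.50 vs 0.15 per share), yet once voiding
# risk is factored in, betting on the mill pays better.
```

So the genius's profit-maximizing bet points away from the policy they believe is best, which is exactly the failure mode described above.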
> But the factory is unpopular and that policy is unlikely to be implemented (since it's down by 20 percentage points). Even if I nudge it up a bit, my bet is likely to be voided, and I make zero return for my knowledge.
I'm not sure that's what would actually happen unless you add some weird constraints. Under usual (unrealistic, okay, but just for the sake of argument) assumptions, you would buy infinitely many As at any price <.9, and infinitely many Bs at any price <.75. By definition you know the true odds, so your posterior predictive has zero hyperparameter variance: every single one of those trades has positive expectation.
Both A and B would increase in price, but you would stop buying B after a while. Assuming infinite time, infinite liquidity, no budget constraints and no weird information asymmetries, you could single-handedly make the market converge at their "true" values: you will always buy A if the price is lower than your threshold, and any rational seller who doesn't believe your odds would sell it to you.
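The convergence argument can be sketched with a toy loop. This is not a market model: the fixed price tick per purchase, and the starting prices, are assumptions for illustration only.

```python
def converge(price, threshold, tick=0.01):
    """An unconstrained trader keeps buying any share priced below their
    true odds; assume (simplistically) each purchase nudges the price up
    by a fixed tick. The price is capped at the trader's threshold."""
    while price < threshold:
        price = min(price + tick, threshold)
    return price

# Starting from the earlier example's prices, both markets are driven
# to the oracle's true odds:
converge(0.40, 0.90)  # A converges to 0.9
converge(0.60, 0.75)  # B converges to 0.75
```

The trader stops buying B sooner simply because its price hits their 0.75 threshold earlier; A keeps rising all the way to 0.9.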
Certainly that's true if I have infinite capital, but if these markets require participants to have infinite capital in order to work, we've got a problem. If I have finite capital, then any money I put on the factory is money I can't put on the mill, and that's losing value.
Actually, if the options are mutually exclusive and bets on the losing option get voided, there's no reason to forbid you from betting on both options with the same money, is there? Only one bet will stand.
For that matter, anyone could safely lend you X, where X is what you already bet on A, for the purpose of betting on B. One way or another you'll get X back in voided bet money, so you're a perfectly safe borrower.
Typically in a betting market, you can continue to buy and sell your shares after placing your bet, so if the market moves and you now think that A is overpriced, you can sell some of your shares and lock in profit. It's not entirely clear to me how you make this work if your investment in A and B is with mirrored funds. If there's a way to make it work, it certainly seems like a step in the right direction.
If you have a budget constraint, it's rational for you to buy argmax(true(A) - market_value(A), true(B) - market_value(B)), which is exactly the Pareto-efficient behavior.
That's where I disagree. Your expected value on buying A is (probability A is implemented) * (true(A) - market_value(A)), and similarly for B, because you receive zero return if the thing you bet on is not implemented. Thus, even if A is badly mispriced, you may not want to buy it if it has very low probability of being implemented.