I've been riding the RP leaderboard at #1 on GlobalCrowd for over 3 months now, so I figured I'd give a status report:

So first let's talk about the red:

Sports, Health, Social:
This is the pop-culture category: who is going to win the Super Bowl, who will win the country music awards, things like that. I didn't do any serious analysis on these types of missions, not even much work on basic statistics. Interestingly enough, my results are close to chance.

Military & Security:
These missions have you predict troop movements, refugee numbers, invasion timing, casualty counts, prosecutions for human rights violations, that sort of thing.

I relied heavily on 4GW (fourth-generation warfare) theory for this, along with Analysis of Competing Hypotheses and analysis of history and past trends. Geopolitics has been extremely unstable recently, with North Korea, Syria, Libya, the Congo, Sudan, Mali and Iran all seeing heightened conflict.

A few highlights:
*Always pay attention to what kinds of forces armies are composed of. The ECOWAS force in Mali, for instance, is made up almost entirely of Malian troops, so it's essentially just the Malian army by a different name. This is important to note when French troops say they are passing command over to ECOWAS: it essentially means things are going back to the status quo of poorly armed and trained Malian fighters and their corrupt officer corps.

*There are always more refugees than are officially registered. Being officially registered means you get a shot at aid; the real number of refugees is double or more the registered number. If you want to predict a number over time, always look at which countries are already saturated with refugees and whether there are neighbors who might open their borders. Keep in mind that refugees likely won't cross conflict areas, so that will constrain which countries they flee to as well. The longer a conflict drags on, the less savings/credit refugees have to draw on, so you will see the numbers skyrocket if it drags on for several months.
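
The heuristic above can be sketched as a toy model. All of the numbers here (the 2x under-registration factor, the monthly growth rate, the 3-month grace period) are hypothetical placeholders I picked for illustration, not real UNHCR figures:

```python
# Toy sketch of the refugee heuristic: real totals run ~2x the registered
# count, and flows accelerate once households exhaust savings and credit.

def estimate_refugees(registered, months_of_conflict,
                      underreport_factor=2.0, depletion_growth=0.15):
    """Rough estimate of the true refugee count from the registered number.

    underreport_factor: assumed ratio of real total to registered count.
    depletion_growth: assumed monthly compounding after savings run out.
    """
    base = registered * underreport_factor
    # After the first few months, flows compound as savings/credit deplete.
    return base * (1 + depletion_growth) ** max(0, months_of_conflict - 3)

print(estimate_refugees(100_000, months_of_conflict=2))  # early: just the 2x factor
print(estimate_refugees(100_000, months_of_conflict=8))  # later: compounding kicks in
```

The point isn't the specific constants, it's that a registered count is a floor, not an estimate, and that duration should enter the model nonlinearly.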

*Pay close attention to tribal/ethnic/religious fault lines, as these are usually how conflicts can be mapped out. Access to OSINT in conflict zones in Africa and the Middle East is extremely sparse, so extrapolating from those fault lines is a must. Investigate people first, then scale out to events, then to ideology.

*Details matter, as always. Iran, for instance, has a growing radiomedicine business. That's what the 20%-enriched uranium is being used for; they would have to use most of their supply to actually build a bomb at current capacity. Supposedly they have also created new types of radiomedicine, though I haven't looked deeply into what they made yet. That doesn't mean they don't want to build a bomb; remember that Iran's neighbors Israel, Pakistan and India all have their own nuclear weapons. They have also exchanged materials and scientists with North Korea on a limited basis, though sourcing that information is extremely difficult for obvious reasons.

*In many ways operating in an extremely volatile environment with limited access to information makes OSINT nearly impossible.

Science & Technology:
They don't have a lot of these kinds of missions. Progress in science and technology is actually extremely vague and difficult to measure. One GPU may be released that is technically better than the competition, but in reality its performance is only theoretically higher. Some technology is so specialized that there aren't any decent comparisons to make. It's only when there is a wide gap that it becomes obvious which choice was right or wrong.

When I bet on, say, SpaceX having a string of successful launches, I am betting on SpaceX's engineers, as well as the interpersonal relationships and the structure of the organization, more than on any given technology. How agile is the organization? How well does their rapid-prototyping process work? What is the incentive system? Does it reward quantity instead of quality?

For example, if I have a staff of coders, do I reward them for the number of bugs that they fix? To what degree do their managers understand whether a bug fix was important or not? Was it just a band-aid fix that can be gamed, or are the managers so inept as to let programmers put bugs into code just so that they can collect more bonuses? Remember that in large structures the appearance of doing work is always more important than doing work.

Business & Economics:

Much of this is a holdover from what I use for Science & Technology and Politics & Policy.

*For predicting something like the FB IPO, you should know that the media loves to tear into a company whenever it does an IPO. The company cannot talk about its IPO, but the media can speculate and lead with headlines in the form of questions; all business journalism is soft journalism. They cannot possibly sum up all of the factors in a meaningful way. Pay attention only to the structure and the message they want to give you, and completely ignore the details.

*Look first at the people. Who is the CEO, where was the CEO educated, what kind of social network do they have? Can you listen to some speeches by the CEO and other staff? Look up information on GlassDoor, pay more attention to negative reviews.

*How do they present themselves? Is it a marketing company, a product company, or a service company? Do they make their money off of B2B, consumer or government items?

Groupon and Zynga are marketer companies, built on fads that were never entirely productive or useful for the ecosystem. Their products and services require ever-increasing sales; once they've maxed out on adopters and burned enough people, the fad burns out. They boom and bust in about 3 years' time. Zynga's "innovative" strategy was to make shitty games that are copies of other games, then panic when sales decline and try to figure out ways to get more people hooked while making it very difficult to cancel billing. Watch out for companies pretending to be useful that only really exist as status symbols.

Apple was a mix of marketer and product company. They had a solid string of early successes: the iBook was the first mainstream laptop with built-in Wi-Fi, and their computers, while not as customizable, were easy to use and made of higher quality materials than, for example, Dell's. There were earlier prototypes of iPhone- and iPad-like devices, but the screen resolution sucked, the processing speed sucked (a 2002 processor in a handheld tablet?!), the battery life was horrible, their features were limited or entirely too costly for what little benefit they gave, and they didn't have reliable wireless. Once competitors overcame their quality control issues and made their designs more aesthetically pleasing, Apple lost its advantage.

To counter this, Apple began courting journalists. Soft bribes are routinely passed around: free swag, paid vacations disguised as conferences, that sort of thing. Built on some truth and a lot of bullshit, the Apple image was formed into the pseudo-cult that now exists. Apple also started biting the hand that fed them by putting more and more restrictions on developers; app store-developer relations can be very fickle.

*Companies that provide services to the government exist for reasons of politics as much as anything; find the constituents and the representatives. Ignore market logic and use political models. What does the social network look like? What is the bandwidth of communication and the strength of the connections? Do you have time to create the nodes and edges in Gephi or run them through R? See Organizational Analysis by Daniel McFarland on Coursera for more.
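
If you don't have time for Gephi or R, even a few lines of plain Python give a first pass at the same idea. The names below are hypothetical placeholders, not real contractors or representatives:

```python
# Minimal node/edge sketch of the political-network analysis described
# above. Each edge is a known relationship (lobbying, funding, contracts).
from collections import defaultdict

edges = [
    ("Contractor_A", "Rep_Smith"),
    ("Contractor_A", "Agency_X"),
    ("Rep_Smith", "Agency_X"),
    ("Contractor_B", "Rep_Smith"),
]

# Degree = number of connections each node has (an undirected count).
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The most connected node is a first guess at who holds political weight.
for node, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(node, d)
```

Degree is the crudest centrality measure there is, but it's enough to decide whether the full Gephi/R treatment is worth the time.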

*As always, use multiple models to judge things, and watch for convergence across models.

*What are the demographics of the customers? This is as much an exercise in listening as anything else.

*How mature is the industry, e.g. newspapers versus computer manufacturers?

*Where are the company's products on the Gartner hype cycle?

*Do they rely on early adopters?

*Have they focused on something like optimizing performance at the expense of other areas, leaving a huge gap for a small start-up to exploit? (Payment/financial services/law/healthcare start-ups take advantage of this, combined with the fact that people see those fields as too difficult or regulated to build businesses in.)

*If you're stuck on something like a stock price, make 3 guesses about where the price could go. Then set an upper and lower bound around each of those 3 guesses so that you have 9 numbers in total; average those numbers and bet on that range. Start making empirical guesses about the probabilities of events, then investigate when you get them wrong.
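
Taken literally, the 9-number trick looks like this. The prices are hypothetical examples:

```python
# Three scenarios for a stock price, each with a lower bound, a central
# guess, and an upper bound -- 9 numbers total, averaged into one estimate.
guesses = [
    (18.0, 20.0, 23.0),   # bearish case: (lower, guess, upper)
    (24.0, 26.0, 28.0),   # base case
    (29.0, 32.0, 36.0),   # bullish case
]

numbers = [x for triple in guesses for x in triple]  # flatten to 9 numbers
estimate = sum(numbers) / len(numbers)
print(len(numbers), round(estimate, 2))  # 9 26.22
```

Forcing yourself to write bounds around each scenario is what does the work here; the averaging just keeps any one scenario from dominating.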

*Do the numbers that they release add up?

*How many layers deep and wide did you go in your analysis to make your judgements? Did you connect things from different specialist areas that the majority of people will not connect? Try to make categories for each set of steps (fact-checking statistics, running your own statistical analysis, applying models) and see how deep or wide your analysis has gone.

*Look again at the ties between industries and customers: can the industry be bailed out, e.g. like banks under the co-risk feedback model? If not, remember that actual events tend to be more extreme and volatile than we can predict.

*Equities markets and start-ups tend to have fat tails: very big winners and lots of losers. Usually the very big winners don't have high degree centrality in the beginning; their ideas may go against the grain or focus on an area of the market that people ignored. As time goes on they gain more connections in the network, their degree increases, and they become central.
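
A toy illustration of what a fat-tailed portfolio looks like. The return figures are made up for illustration:

```python
# Hypothetical outcomes for 100 start-up investments: most go to zero,
# a few do fine, and one huge winner dominates the aggregate result.
returns = [-1.0] * 90 + [0.5] * 9 + [50.0]

mean = sum(returns) / len(returns)
# Share of all positive returns contributed by the single biggest winner.
top_share = max(returns) / sum(r for r in returns if r > 0)

print(round(mean, 3), round(top_share, 3))
```

With numbers like these, one outlier supplies over 90% of the upside, which is why averages and "typical" cases mislead in these markets.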

*Technical analysis and mathematical models of the market can work solely because people believe they do.

*When building a Bayes model, never assign a probability of zero unless the event really is impossible. The variables multiply off of each other, so a single zero will force the event's probability to zero, which has far-reaching consequences.
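
The zero-probability trap in miniature. Flooring each factor with a small epsilon (or, more formally, Laplace smoothing) is one standard fix; the probabilities below are arbitrary examples:

```python
# A naive product of evidence probabilities, Bayes-style. One hard zero
# wipes out the whole posterior no matter how strong the other evidence is.
EPS = 1e-6  # small floor standing in for "very unlikely but not impossible"

def joint(probabilities, floor=True):
    p = 1.0
    for prob in probabilities:
        p *= max(prob, EPS) if floor else prob
    return p

evidence = [0.9, 0.7, 0.0, 0.8]      # one factor was (wrongly) set to zero
print(joint(evidence, floor=False))  # 0.0 -- the zero dominates everything
print(joint(evidence, floor=True))   # small but nonzero, so updating survives
```

The epsilon value itself is a judgement call; the important part is that "I've never seen it happen" gets encoded as very small, never as zero.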

Politics & Policy:

*When debating who will win an election, remember that the media creates candidates for the sole purpose of tearing them down later. See: Herman Cain.

*Look at who is embedded inside the current bureaucracy. In Egypt, many of the same people from the Mubarak regime, e.g. judges, are still holding their posts, along with people affiliated with the Muslim Brotherhood. Usually when there is a large change in politics it is only the appearance of one; actual change is typically measured in decades. Same with Russia: look at Putin, then go deeper and analyze the bureaucracy that has held power for decades.

*Remember that politicians believe their own bullshit.

*This QZ article has some other useful pieces:

http://qz.com/40960/the-14-rules-for-pr ... al-events/

*Learn the subtle differences in how a given government elects or pretends to elect people e.g. the difference between the electoral college in the US versus the coalition system in Israeli politics. No country is just "a democracy".

*Divisions can sprout naturally from people trying to do or be the same thing in a slightly different way. See: Religion, programming languages, ethnicity. Bonus points if the ideology functions as an operating system for the mind, like programming languages and religion.

*Most love is conditional, not unconditional, so it divides people. Hate unites; anger is the most viral emotion humans have.

*The CIA factbook is always better than wikipedia.

*Watch out for politicians that have something to prove. Maybe 80% of "troop mobilizations" are just scare-mongering shows to distract the citizens of their own countries from internal problems. The other 20% is the gray area of political narcissism, profiteering, stupidity and accidents. We almost blew up the world a few times over computer glitches that showed nuclear missiles incoming, on both the US and Russian sides:



https://byustudies.byu.edu/PDFLibrary/2 ... 4bafc0.pdf

*Break regions and demographic groups down. Consider the South Korean and Israeli examples.

*Get on IRC or forums and talk to people from the country in question.


So what stands out is that actual analysis tends to be done very quickly and haphazardly. Models and rigor aren't given much respect in practice. Models are important because they allow you to track how you do things and improve your decision making. In the same way most programmers don't follow documentation procedures or other common-sense practices, analysts don't chart out a good analysis of competing hypotheses. Another large problem is giving too much weight to statistics: start-ups pad their numbers or exaggerate their innovations, Spetsnaz training may not be all it's cracked up to be, and much of financial econometrics exists to get around regulations or manage perceptions by creating obscurity rather than to inform an analyst about reality. Many analysts will take up statistics and facts that were conjured from thin air.

At the lowest level, of course, is the raw intelligence report. This report is generally extraordinarily well evaluated and supported. No scholar could really, within the normal limits of national security, ask much more. The source, particularly in CIA-originated reports, is carefully and intelligently described as to his professional knowledge and competence, his outlook, his opportunity to gather the information, and his previous reliability. Not only the date of acquisition of this information but place as well is given. In some reports the rapporteur also provides a field evaluation of the substantive information elicited from the source. The user of this kind of report can easily and effectively apply the canons of evidence in evaluating and testing the information.

But as we move up the ladder of intelligence reports the documentation gets sparser. The NIS (National Intelligence Survey), to use a well-known example, is in effect a scholarly monograph, digesting a great multitude of raw reports. Its total documentation usually consists of a single, very brief paragraph commenting on the general adequacy of the source material. No individual item within the NIS section can be tracked down to a particular source or specific group of sources. As one moves in the NIS from the individual chapter sections to the overall brief, the documentation becomes even more general and less meaningful.

At the more exalted level of the NIE (National Intelligence Estimate), documentation even in the generalized form of comments on sources has usually disappeared altogether. One is forced to rely on the shadings given to “possibly,” “probably,” and “likely” and on other verbal devices for clues as to the quantity and quality of the basic source data. These examples from the NIS and NIE are paralleled in a great many other publications of similar refinement. One may admire the exquisite nuances and marvel at what a burden of knowledge and implicit validation the compressed language of a finished “appreciation” can be forced to carry, but one cannot help being concerned about the conclusions. Upon what foundations do those clever statements rest?

If the final products were at least based upon documented intermediate inputs, the uneasiness might be somewhat less. But in my own experience the “contributions” or inputs, with the exception of certain economic papers, are normally devoid of any specific identification of the kinds and types of reports or other evidence upon which they are based. And in my experience those inputs are often based on other inputs prepared at a lower echelon until at last we reach the analyst with access to the raw data. At the upper level of joint or national discussion and negotiation and compromise, which eventuates in the exquisite nuance, the carefully hedged phrase, or sometimes a dissenting footnote, the remove from the original evidence can be, and often is, considerable.

Statistics: Posted by General Patton — Thu Apr 04, 2013 6:53 pm
