Apparently, some readers (follow my gaze…) expected this blog to talk more about science and my Ph.D. than about hummus and camels. So, for the nerds out there who want to understand a little more of what I am doing in my research, I will try to be clear and concise.

My main interest is the decision-making process. My lab specializes in neuroscience, and instead of mice or monkeys we use archerfish for our experiments: a special type of fish from Southeast Asia that spits jets of water at bugs to knock them down and eat them.

I don't really have a passion for fish, except for grilled or sushied ones, but after three years of working with them and being spat at, I grew to find them cute. The purpose of my research is to investigate whether these fish can make rational decisions or whether they behave as irrationally as, let's say, a human.

In 1947, the mathematician John von Neumann and the economist Oskar Morgenstern came up with a theorem to define what a rational individual should do. According to their theory, we should always seek to maximize our expected utility, defined as the size of each possible gain multiplied by its probability, summed over all outcomes. For example, if we play a game where a fair coin is flipped and the prize is $10 for tails and $2 for heads, the expected utility is $6 (10 × 0.5 + 2 × 0.5). They formalized their theorem in four axioms:
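The coin-flip calculation above generalizes to any lottery. Here is a minimal sketch (the function name and list-of-pairs representation are my own, not anything from the theory itself):

```python
# A sketch of the expected-utility calculation from the coin-flip example:
# a lottery is a list of (outcome, probability) pairs.
def expected_value(lottery):
    """Sum of outcome * probability over all (outcome, probability) pairs."""
    return sum(outcome * prob for outcome, prob in lottery)

# Fair coin: $10 for tails, $2 for heads, each with probability 0.5.
coin_flip = [(10, 0.5), (2, 0.5)]
print(expected_value(coin_flip))  # 6.0
```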

**Completeness**: an individual has clear preferences and can always choose between two alternatives.

**Transitivity**: if I like oranges more than pears (oranges > pears) and pears more than apples (pears > apples), I should always pick an orange over an apple (oranges > apples).

**Independence**: if I like one brand of cereal more than another (brand A > brand B), having the same toy added to both boxes shouldn't make me choose my least favorite brand (brand A + toy > brand B + toy).

**Continuity**: that's the trickiest. It says that if your preferences go A > B > C, there is some probability p at which a gamble giving you A with probability p and C with probability 1 − p is exactly as attractive as getting B for sure.

As an example, let's assume that we all prefer winning $2 over winning nothing, and winning nothing over dying ($2 > $0 > death). If I asked you to take a bet where on one hand you win $2 but with a small chance of dying, and on the other hand you win nothing but don't die either, you would probably tell me that no one would accept such a deal. But let's assume you want to buy a Coke, and at the corner shop on the other side of the road it's $2 cheaper than at the shop where you are now. If you cross the street, you accept a small chance of being run over to save $2…
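The axioms can also be checked mechanically. Here is a sketch of the transitivity axiom using the fruit example from the list above (the data representation and helper function are illustrative, not part of the theory):

```python
from itertools import permutations

# A preference relation is a set of (a, b) pairs meaning "a is preferred to b".
def transitivity_violations(prefers):
    """Return (a, b, c) triples where a > b and b > c but not a > c."""
    items = {x for pair in prefers for x in pair}
    return [(a, b, c)
            for a, b, c in permutations(items, 3)
            if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers]

# Rational: oranges > pears > apples, with oranges > apples stated explicitly.
rational = {("orange", "pear"), ("pear", "apple"), ("orange", "apple")}
print(transitivity_violations(rational))    # [] — no violations

# Irrational: same chain, but apples get picked over oranges.
irrational = {("orange", "pear"), ("pear", "apple"), ("apple", "orange")}
print(transitivity_violations(irrational))  # violations found
```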

In 1979, two Israeli psychologists, Daniel Kahneman and Amos Tversky, published Prospect Theory. Kahneman won the Nobel Prize in 2002 for their work, but unfortunately Tversky had passed away in 1996 and thus didn't get it. If you are interested in the question, I recommend Kahneman's book "Thinking, Fast and Slow", which develops many of his findings with a lot of concrete examples and striking evidence of how much our brain lies to us.

The big difference between the expected-utility and prospect-theory approaches is that the latter claims we don't make decisions based only on the absolute value of the outcome, but on relative losses and gains. A simple illustration: when buying milk, we are ready to compare all the brands to save a few cents, but if the car we want to buy comes with an amazing new gadget for a few hundred dollars, we won't think twice. Also, winning one dollar does not provide the same magnitude of joy as the pain caused by losing one dollar. From these observations, the two psychologists shed new light on our seemingly inconsistent decisions, and they ran many experiments to investigate and confirm their assumptions.
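One way prospect theory captures the gain/loss asymmetry described above is with a value function that is steeper for losses than for gains. The sketch below uses the parameter values Tversky and Kahneman estimated in later work (α ≈ 0.88, λ ≈ 2.25); those numbers are not from this post and are illustrative only:

```python
# A sketch of a prospect-theory-style value function: losses hurt more
# than equal gains please. alpha controls curvature; lam (lambda) is the
# loss-aversion coefficient. Parameters are Tversky & Kahneman's 1992
# estimates, shown here purely for illustration.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

print(value(1))   # joy of winning $1  -> 1.0
print(value(-1))  # pain of losing $1  -> -2.25 (about twice as intense)
```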

One experiment in particular aimed to show that humans generally don't like risk, but that when no safe option is on the table we become more willing to gamble. To demonstrate this, they ran a very simple experiment:

Students got to choose between two options: win 3,000 (old) shekels for sure, or 4,000 with a 20% chance of winning nothing. As you probably guessed, most people preferred the safe option, i.e. the 3,000 shekels. Then they asked a second question: they divided the chances of winning by 4, so the options became a 25% chance of winning 3,000 or a 20% chance of winning 4,000. As you may have figured out for yourself, most students picked the 4,000 option this time, which violates the independence axiom of expected utility theory.
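Plugging the numbers from the experiment into the expected-value formula shows why the switch is inconsistent: the risky option has the higher expected value in both questions, yet most people take it only in the second one. A quick sketch of the arithmetic:

```python
# Expected values for the two questions of the experiment above
# (amounts in old shekels, probabilities as stated in the post).
def expected_value(lottery):
    return sum(outcome * prob for outcome, prob in lottery)

# Question 1: 3000 for sure vs. 4000 with an 80% chance (20% chance of nothing).
q1_safe  = expected_value([(3000, 1.00)])  # 3000.0
q1_risky = expected_value([(4000, 0.80)])  # 3200.0

# Question 2: the same gambles with the chances of winning divided by 4.
q2_safe  = expected_value([(3000, 0.25)])  # 750.0
q2_risky = expected_value([(4000, 0.20)])  # 800.0

# The risky option wins on expected value in BOTH questions, so an
# expected-utility maximizer should never switch between them.
print(q1_safe, q1_risky, q2_safe, q2_risky)
```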

The explanation for this bias is pretty simple: since we hate losing, we pick the safe bet when one is available. But if we know there is a fair chance of losing anyway, we become willing to take risks. Now my question is: are fish also risk-averse…? TBC

Yeah right, you find them cute. You keep cutting their brains out!!

On another note: you can add irrationality on the present-future axis http://www.nature.com/news/sustainability-game-human-nature-1.19417

“Today, Zurich residents use less energy than the average person in Switzerland, who in turn uses only about half of that used by most US residents.”

I see why you shared this article 😛 But cool one and Hopp Zuri!