plus.maths.org
http://plus.maths.org/content/rss.xml

Arithmetic billiards

Tue, 24 Apr 2018 14:55:13 +0000

By Antonella Perucca

Mathematical billiards is an idealisation of what we experience on a regular pool table. In mathematical billiards the ball bounces around according to the same rules as in ordinary billiards, but it has no mass, which means there is no friction. There also aren't any pockets that can swallow the ball. This means that the ball will bounce infinitely many times on the sides of the billiard table and keep going forever. One fascinating aspect of mathematical billiards is that it gives us a geometrical method to determine the least common multiple and the greatest common divisor of two natural numbers.

Have a look at the Geogebra animation below (the play button is in the bottom left corner) and try to figure out how the construction works. If you would like to play the animation again, double click the refresh button in the top right corner. The two natural numbers are 40 and 15 in this case. The least common multiple of 40 and 15 equals 120, and the greatest common divisor is 5.

(Geogebra animation "Arithmetic billiards": https://www.geogebra.org/material/iframe/id/ujYNgZbD)

The basics

Here is the basic idea. Suppose we are given two positive whole numbers a and b, neither of which is a multiple of the other (the case where one is a multiple of the other is easy and is left to the reader). For the billiard table we take a rectangle whose sides have lengths a and b. We shoot a billiard ball from one corner (the bottom left in the figure above) making a 45 degree angle with the sides. The billiard ball bounces off the rectangle's sides. It does not lose speed and, by the law of reflection, is reflected at a 45 degree angle each time it meets a side (thus the path only makes left or right 90 degree turns).

The path of the billiard ball consists of line segments. We claim that the ball eventually hits a corner and that the least common multiple of a and b is the length of the path the ball has traversed until it hit the corner, divided by √2. If you decompose the billiard table into unit squares (squares of side length one), the least common multiple is equal to the number of unit squares which are crossed by the path.

We also claim that the path crosses itself. The first segment of the path contains the point of self-intersection which is closest to the starting point. The greatest common divisor of the two given numbers, we claim, is the distance from the starting point to the closest point of self-intersection, divided by √2. It is also equal to the number of unit squares crossed by the piece of the path from the starting point to the first point of self-intersection.

Mirror billiards

While looking at an object in a mirror, you have the impression that the object is behind the mirror. Notice that three points are aligned: the point marking your position, the point on the mirror where you see the reflection of the object and the (imaginary) point behind the mirror where you believe the object to be. To prove our claims above, we are going to exploit this simple idea, the mirror being one side of the billiard table.

The least common multiple of a and b, written as lcm(a,b), is the smallest natural number that is a multiple of both a and b. We can write it as lcm(a,b) = k × a = l × b for some positive whole numbers k and l. For example, if a = 40 and b = 15, then lcm(40,15) = 120, which can be written as 120 = 3 × 40 = 8 × 15. In this case k = 3 and l = 8.
Given our two numbers a and b, neither of which is a multiple of the other, start by forming a square with sides of length lcm(a,b). Notice that this square can be decomposed into rectangles with sides of length a and b. That's because a fits into lcm(a,b) a total of k times and b fits into lcm(a,b) a total of l times. Because lcm(a,b) is the least common multiple of a and b, our square is the smallest square that can be tiled by rectangles with sides of lengths a and b in this way. We'll call the bottom left of these rectangles and the[...]
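
A minimal sketch (not from the article) of how one might check these claims numerically: it traces the 45-degree path on an a-by-b table in half-unit steps, counts the unit squares crossed until a corner is hit, and records the self-intersection closest to the start. The tracing strategy and function names are my own illustrative choices.

from math import gcd

def billiard(a, b):
    """Follow the 45-degree path on an a-by-b table in half-unit steps.
    Coordinates are doubled so every visited point has integer coordinates.
    Returns the number of unit squares crossed and the self-intersection
    point closest to the start (in ordinary coordinates)."""
    A, B = 2 * a, 2 * b                 # doubled table dimensions
    x, y, dx, dy = 0, 0, 1, 1
    seen = set()
    crossings = []
    steps = 0
    while True:
        x, y, steps = x + dx, y + dy, steps + 1
        if (x, y) in seen and 0 < x < A and 0 < y < B:
            crossings.append((x / 2, y / 2))   # path passes through an old interior point
        seen.add((x, y))
        if x in (0, A) and y in (0, B):        # reached a corner: stop
            break
        if x in (0, A):                        # bounce off a vertical side
            dx = -dx
        if y in (0, B):                        # bounce off a horizontal side
            dy = -dy
    squares_crossed = steps // 2               # two half-steps per unit square
    closest = min(crossings, key=lambda p: p[0] ** 2 + p[1] ** 2)
    return squares_crossed, closest

a, b = 40, 15
squares, closest = billiard(a, b)
print(squares)                  # 120, matching the least common multiple
print(a * b // gcd(a, b))       # 120, computed directly for comparison
print(closest)                  # (5.0, 5.0): the crossing closest to the start
print(gcd(a, b))                # 5, the greatest common divisor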



The maths of randomness

Fri, 20 Apr 2018 14:03:29 +0000

By Martin Hairer

For an idea we are all familiar with, randomness is surprisingly hard to formally define. We think of a random process as something that evolves over time but in a way we can't predict. One example would be the smoke that comes out of your chimney. Although there is no way of exactly predicting the shape of your smoke plume, we can use probability theory – the mathematical language we use to describe randomness – to predict what shapes the plume of smoke is more (or less) likely to take.

(Image: A beamsplitter.)

Smoke formation is an example of an inherently random process, and there is evidence that nature is random at a fundamental level. The theory of quantum mechanics, describing physics at the smallest scales of matter, posits that, at a very basic level, nature is random. This is very different to the laws of 19th century physics, such as Newton's laws of motion (you can read more about them here). Newton's physical laws are deterministic: if you were god and you had a precise description of everything in the world at a particular instant, then you could predict the whole future. This is not the case according to quantum mechanics.

Think of a beam splitter (a device that splits a beam of light in two); half of the light passes through the beam splitter, and half gets reflected off its surface. What would happen if you sent a single photon of light to the beam splitter? The answer is that we can't know for sure. Even if you were completely omniscient and knew everything about that photon and the experiment, there's no way you could predict whether the photon will be reflected or not. The only thing you can say is that the photon will be reflected with a probability of 1/2, and go straight with a probability of 1/2. (You can read more about quantum mechanics here.)

Insufficient information

Most of the time when we use probability theory, it's not because we are dealing with a fundamentally random process. Instead, it's usually because we don't possess all the information required to predict the outcome of the process. We are left to guess, in some sense, what the outcome could be and how likely it is that different outcomes happen. The way we make or interpret these "guesses" depends on which interpretation of the concept of probability you agree with.

(Image: One of the 2016 US presidential ballots. Photo Corey Taratuta CC BY 2.0.)

The first interpretation is subjective – you interpret a probability as a guess, made before the event, of how likely it is that a given outcome will happen. This interpretation is known as Bayesian, after the English statistician Thomas Bayes, who famously came up with a way of calculating probabilities based on the evidence you have. In particular, Bayes' theorem in all its forms allows you to update your beliefs about the likelihood of outcomes in response to new evidence. An example is the changing percentages associated with the outcomes of certain candidates winning future elections. It was a widely held belief that Hillary Clinton would win the US presidential election in 2016: going into the polls it was estimated she had an 85% probability of winning. As results from the election became available, this belief was continuously updated until the likelihood reached 0%, indicating certainty that she had lost.

The other interpretation of probabilities is objective – you repeat the same experiment many times and record how frequently an outcome occurs.
This frequency is taken as the probability of that outcome occurring and is called the frequentist interpretation of probabilities. (You can read more about these two interpretations in Struggling with chance.) Statisticians often argue about who is right: proponents of the first interpretation (the Bayesians) and those who believe more in the second interpretation (the frequentists). (For example, what would a frequentist interpretation of the probability of Clinton’s election chances be? You can't repeat an election multiple times!) Agnostic statist[...]
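
To make the Bayesian updating concrete, here is a small illustrative sketch (not from the article) applying Bayes' theorem with made-up numbers: a prior belief about a candidate winning is revised after an early result that is more likely under a loss than under a win. All the probabilities below are hypothetical.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Hypothetical numbers, purely for illustration.
prior_win = 0.85           # belief before any results come in
p_result_if_win = 0.30     # chance of seeing this early result if the candidate wins
p_result_if_loss = 0.80    # chance of seeing it if the candidate loses

posterior_win = bayes_update(prior_win, p_result_if_win, p_result_if_loss)
print(round(posterior_win, 3))   # 0.68: the belief has dropped, but not yet to 0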



The maths of randomness: universality

Thu, 19 Apr 2018 15:53:03 +0000

By Martin Hairer

Despite the fact that randomness is surprisingly hard to define, we do have a well defined way of describing randomness mathematically with probability theory. (You can read more in the previous article.) And there are two guiding principles in understanding probabilities: symmetry and universality.

Universality, the second guiding principle in understanding probability, is a more subtle concept than symmetry. (You can read about the principle of symmetry here.) The idea of universality is that if an outcome is a consequence of many different sources of randomness, then the details of the underlying mechanisms should not matter.

At larger scales

The behaviour of all fluids can be described by the same mathematics. This concept comes from theoretical physics. Fluids, for example, all behave in much the same way, even though they are made up of molecules with very different shapes and properties. If you looked at the molecules in two different fluids microscopically, you would find that they look very different. If you looked at how the two fluids behave at large scales, however, you would see very similar behaviour. The behaviour of all fluids can be described with the same mathematical model, one which has only a few parameters. (You can read more about fluid mechanics in Going with the flow and Maths in a minute: The Navier-Stokes equations.)

It's a similar story in probability theory. At large scales information is aggregated and, sometimes, the same mathematics can describe the outcomes of different underlying processes. An example of a mathematical theorem that captures this concept is the central limit theorem. This theorem says that if we compound many independent random quantities, the result will always follow a "bell curve" – the shape of the normal distribution. (You can read more about the central limit theorem here.)

The central limit theorem is universal. It says that a large set of averages of samples of the property you are measuring will follow the normal distribution, even if the distribution of the property you are measuring is itself not normal. The distribution for a single coin flip is uniform: it lands heads half the time and tails half the time. But if you flip a coin 100 times and take the average (with 0 for heads and 1 for tails), then flip a coin another 100 times and take the average, and so on, until you have lots of averages, then these averages will be normally distributed with mean 0.5. Similar to the case of fluid dynamics, the microscopic details of the distribution of the underlying property can be ignored, as the macroscopic view of the distribution of averages will always be normal.

Brownian motion

One of the most famous and surprising examples of universality comes from the discovery of Brownian motion. In 1827 the Scottish botanist Robert Brown was looking at pollen grains suspended in water under a microscope. He observed highly irregular motion in the microscopic particles released by the pollen grains that he could not explain. Intrigued, he went on to conduct many experiments, ruling out any external causes for this jittery motion from the surrounding environment or the experimental set-up, and any internal causes, such as the particles being organic. (He observed the same motion in inorganic particles, such as coal dust.)
This jittery motion – Brownian motion – was independently explained by Albert Einstein and Marian Smoluchowski in 1905: the vibrating molecules in the liquid give tiny kicks to the microscopic particles, and these many independent tiny kicks accumulate to buffet the microparticles about.

(Video: https://www.youtube.com/embed/cDcprgWiQEY)

The resulting Brownian motion is random – you can't predict with certainty the position of one of these microparticles from one moment to the next. But you can assign [...]
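
A small simulation sketch (not from the article) of the coin-flip example just described: each sample is the average of 100 flips, and a histogram of many such averages clusters around 0.5 in a bell-curve shape. The sample sizes and the use of Python's random module are my own choices.

import random
from statistics import mean, stdev

def average_of_flips(n_flips=100):
    """Average of n_flips coin flips, scoring heads as 0 and tails as 1."""
    return mean(random.randint(0, 1) for _ in range(n_flips))

# Each individual flip is uniform on {0, 1}, but the averages pile up around 0.5.
averages = [average_of_flips() for _ in range(10_000)]
print(round(mean(averages), 3))   # close to 0.5
print(round(stdev(averages), 3))  # close to 0.05, i.e. 0.5 divided by the square root of 100

# Crude text histogram: the counts rise and then fall, like a bell curve.
for lo in [0.35, 0.40, 0.45, 0.50, 0.55, 0.60]:
    count = sum(lo <= a < lo + 0.05 for a in averages)
    print(f"{lo:.2f}-{lo + 0.05:.2f}: {'#' * (count // 200)}")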



The maths of randomness: symmetry

Thu, 19 Apr 2018 15:39:28 +0000

By Martin Hairer

Despite the fact that randomness is surprisingly hard to define, we do have a well defined way of describing randomness mathematically with probability theory. (You can read more in the previous article.) And there are two guiding principles in understanding probabilities: symmetry and universality.

Symmetry comes into play in probability theory in the following way: if different outcomes are equivalent they should have the same probability. Two outcomes might look different to you (say a coin landing on heads versus it landing on tails), but the process that produces the outcomes (the mechanics of the spinning coin) is entirely indifferent to which occurs. (You can read more in Struggling with chance.) This symmetry leads us to judge that the probability of a coin landing tails should be the same as that of it landing heads, giving them equal probabilities of 1/2. Similarly the probability of rolling any of the six numbers on a die should be the same: so each has a probability of 1/6.

Symmetry is a powerful guiding principle in probability theory, as it is in many areas of mathematics. But you have to be careful, even in simple situations, when using it to apply probabilities to the real world.

Two envelopes

For example, imagine you have two envelopes. They both contain a cheque, one for twice as much money as the other. You choose one envelope and look inside and see it contains a cheque for a certain amount. Now you have one chance to decide whether to keep that money, or switch envelopes. What should you do?

Write x for the amount that's in your chosen envelope. This means that the amount of money in the other envelope is either 2x or x/2. The probability that it's 2x is 1/2 and so is the probability that it's x/2. So the expected amount you'll get is

(1/2) × 2x + (1/2) × x/2 = 5x/4.

Since that's bigger than x, you should always swap envelopes. But what if you'd been asked to choose before you looked inside the first envelope? You would have changed your mind, then changed your mind again – obviously something silly is going on!

This isn't really a paradox once you realise that the different possible outcomes are not symmetric. If the cheque in your first envelope said £100,000, you'd keep it, as the threat of losing £50,000 might seem too great. But you might be happy to gamble if you saw £5 in your first envelope. More importantly, you have to include your prior beliefs about how much the person handing you the two envelopes would be willing to part with. Symmetry suggests that all possible amounts are equally likely, which is not only entirely unrealistic, but also leads to an ill-posed probabilistic model. The situation as described above doesn't have enough information to build a complete model. (You can read a full explanation of this famous paradox in The two envelope problem solved.)

This is a trivial and unlikely example, but the point it makes is important. Blindly following the principle of symmetry could lead you to the wrong mathematical model, or indeed trick you into wrongly thinking you have enough information to create such a model at all. And the consequences of using the wrong mathematical model can be dramatic.

The danger of flawed statistics

In 1999 Sally Clark was tried in court over the murder of two of her children. The defence argued that both children died of Sudden Infant Death Syndrome (SIDS or "cot death").
An expert witness for the prosecution (who has since been discredited) argued that the probability of two children dying of SIDS was the square of the probability of one child dying of SIDS – leading to a value of 1 in 73 million for the frequency of two cases of SIDS in such a family. But this argument assumes that the two deaths were independent, whereas it is highly likely that there are unknown environmental or genetic factors that might predispose a family to SIDS, making a second death much more likely. This incorrect mathematical mod[...]
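
A short sketch (with hypothetical numbers, not the figures used in the trial) of why independence matters here: squaring a per-child probability is only valid if the two events are independent; if a shared factor raises the conditional probability of a second death, the joint probability can be far larger.

# Hypothetical illustrative numbers only.
p_one = 1 / 8500          # assumed probability of one SIDS death in a family

# If the two deaths were independent, the joint probability would be the square:
p_both_independent = p_one ** 2
print(f"1 in {1 / p_both_independent:,.0f}")   # roughly 1 in 72 million

# If shared genetic or environmental factors made a second death, say,
# 100 times more likely once one has occurred (a made-up conditional figure):
p_second_given_first = 100 * p_one
p_both_dependent = p_one * p_second_given_first
print(f"1 in {1 / p_both_dependent:,.0f}")     # roughly 1 in 720,000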



The two envelopes problem solved

Wed, 18 Apr 2018 12:06:33 +0000

The two envelope problem is a famous paradox from probability theory (which we first presented on Plus back in September). Imagine you are given two envelopes, one of which contains twice as much money as the other. You're allowed to pick one envelope and keep the money inside. But just before you open your chosen envelope you are given the chance to change your mind. Should you stick with the envelope you picked first or switch?

To find out, write x for the amount that's in your chosen envelope. This means that the amount of money in the other envelope is either 2x or x/2. The probability that it's 2x is 1/2 and so is the probability that it's x/2. So the expected amount you'll get for switching is

(1/2) × 2x + (1/2) × x/2 = 5x/4.

Since that's bigger than x, you should swap envelopes. But what if you are given another chance to swap envelopes after you have changed your mind once? By the same reasoning as above you should swap back again. And then, by the same argument again, you should swap a third time, and so on, forever. You end up in an infinite loop of swapping and never get any money at all. What's wrong with this reasoning?

A resolution

Let's write A for the envelope you picked at first and B for the other one. We write x for the amount of money in A. Now since we haven't opened envelope A, x isn't a fixed amount: it's a random variable. It can take one of two values: the smaller amount of money that's hidden in the two envelopes or the larger amount of money. Let's write y for the smaller amount and 2y for the larger amount (recall that one envelope contains twice as much money as the other). Since you have picked envelope A randomly, there's a 50:50 chance that A contains either of the two amounts. This means that the expected amount of money in envelope A is

E(A) = (1/2) × y + (1/2) × 2y = 3y/2.

We said above that the expected amount in envelope B is

E(B) = (1/2) × 2x + (1/2) × x/2 = 5x/4.     (1)

But recall that x isn't a fixed amount but can take one of two values. In the case that envelope B contains 2x, envelope A contains the smaller amount of money, so x = y. In the case that envelope B contains x/2, envelope A contains the larger amount of money, so x = 2y. So in formula (1) above, the first x really stands for y and the second x stands for 2y. The two x in the formula are actually different and shouldn't have been added up to give 5x/4. Substituting y for the first appearance of x in (1) and 2y for the second gives

E(B) = (1/2) × 2y + (1/2) × (2y)/2 = 3y/2.

Thus E(A) = E(B), so there is no incentive to switch envelopes and hence no paradox.

What if you open envelope A?

What if we had already opened envelope A, to find x inside, before being offered the chance to switch? Can we still produce the apparent paradox? If you have opened envelope A then x is a fixed amount of money. There's a 50:50 chance of finding 2x or x/2 in envelope B, so the expected amount in envelope B is

E(B) = (1/2) × 2x + (1/2) × x/2 = 5x/4.

(We heard about the two envelopes problem in a talk by the Fields medallist Martin Hairer at the Heidelberg Laureate Forum 2017. Photo: Bernhard Kreutzer for HLF (c) Pressefoto Kreutzer.)

The formula is now correct. It tells you that on average (if you repeated the same wager many times with the same amount x in envelope A), you'd do better by switching. The paradox doesn't arise. If after switching to envelope B you are given the chance to switch back again, you won't, because you already know that the amount in A is less than the expected amount in B. The paradox arose in the original version because both envelopes could be treated the same – the situation was symmetric. Once you have opened envelope A, however, the symmetry is broken.
Notice, however, that opening envelope A and seeing the amount x may change your mind about the probability that envelope B contains 2x or x/2. For example, if x is a very large amount, then you might think it very unlikely that envelope B contains the even lar[...]
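
A small simulation sketch (not from the article) of the resolution: over many rounds with a fixed pair of amounts y and 2y, always switching earns exactly the same on average as never switching, namely the 3y/2 computed above. The round structure and the amounts are illustrative choices.

import random

def play_round(y, switch):
    """One round: envelopes hold y and 2y; pick one at random, optionally switch."""
    envelopes = [y, 2 * y]
    random.shuffle(envelopes)
    chosen, other = envelopes
    return other if switch else chosen

y, rounds = 10.0, 100_000
keep_avg = sum(play_round(y, switch=False) for _ in range(rounds)) / rounds
swap_avg = sum(play_round(y, switch=True) for _ in range(rounds)) / rounds

print(round(keep_avg, 2))   # close to 15.0, i.e. 3y/2
print(round(swap_avg, 2))   # also close to 15.0: switching gains nothing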



Error correcting codes

Tue, 17 Apr 2018 12:07:36 +0000

By Chris Budd

This article is based on a talk in Chris Budd's ongoing Gresham College lecture series. You can see a video of the talk below and there is another article based on the talk here.

We are surrounded by information and are constantly receiving and transmitting it to other people all over the world. With good reason we can call the 21st century the information age. But whenever a message is being sent, be it over the phone, via the internet, or via satellites that orbit the Earth, there is the danger of errors creeping in. Background noise, technical faults, even cosmic rays can corrupt the message and important information may be lost. Rather amazingly, however, there are ways of encoding a message that allow errors to be detected, and even corrected, automatically. Here is how these codes work.

(Image: The 15th century scribe, manuscript illuminator, translator and author Jean Miélot at his desk.)

Error detecting codes

The need for the detection of errors has been recognised since the earliest scribes copied manuscripts by hand. It was important to copy these without error, but to check every word would have been too large a task. Instead various checks were used. For example, when copying the Torah the letters, words, and paragraphs were counted, and then checked against the middle paragraph, word and letter of the original document. If they didn't match, there was a problem.

Modern digital information is encoded as sequences of 0s and 1s. When transmitting binary information a simple check that is often used involves a so-called hash function, which adds a fixed-length tag to a message. The tag allows the receiver to verify the delivered message by re-computing the tag and comparing it with the one provided in the message.

A simple example of such a tag comes from including a check or parity digit in each block of data. The simplest example of this method is to add up the digits that make up a message and then append a 1 if the sum is odd and a 0 if it is even. So if the original message is 111 then the message sent is 1111, and if the original message is 101, then the message sent is 1010. The effect of this is that every message sent should have digits adding up to an even number. On receiving the transmission the receiving computer will add up the digits, and if the sum is not even then it will record an error.

An example of the use of this technology can be found in the bar codes that are used on nearly all mass-produced consumer goods. Such consumer products typically use either UPC-A or EAN-13 codes. UPC-A describes a 12-digit sequence, which is broken into four groups. The first digit of the bar code carries some general information about the item: it can either indicate the nationality of the manufacturer, or describe one of a few other categories, such as the ISBN (book identifier) numbers. The next five digits are a manufacturer's identification. The five digits that follow are a product identification number, assigned by the manufacturer. The last digit is a check digit, allowing a scanner to validate whether the barcode has been read correctly. Similar check digits are used in the numbers on your credit card, which are usually a sequence of decimal digits. In this case the Luhn algorithm is used (find out more here).

Error correcting codes

Suppose that we have detected an error in a message we have received. How can we proceed to find the correct information contained in the message? There are various approaches to this.
One is simply to (effectively) panic: to shut the whole system down and not to proceed until the problem has been fixed. The infamous blue screen of death, pictured below and familiar to many computer operators, is an example of this.

(Image: Blue Screen of Death on Windows 8.)

The rationale behind this approach is that in some (indeed many) cases it is better to do nothing than to do something which you know is wrong. How[...]
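
Here is a brief sketch (not from the article) of the parity-digit scheme described above: the sender appends one digit so that the digits of the transmitted block sum to an even number, and the receiver flags any block whose digits sum to an odd number. This catches any single flipped digit, though not every multi-digit error.

def add_parity(bits):
    """Append a parity digit so the digits of the sent message sum to an even number."""
    return bits + ("1" if bits.count("1") % 2 == 1 else "0")

def check_parity(received):
    """Return True if the received block passes the parity check."""
    return received.count("1") % 2 == 0

print(add_parity("111"))      # 1111, as in the example above
print(add_parity("101"))      # 1010
print(check_parity("1111"))   # True
print(check_parity("1011"))   # False: a single flipped digit is detected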



Genetics: Nature's digital code

Tue, 17 Apr 2018 11:11:03 +0000

By Chris Budd

This article is based on a talk in Chris Budd's ongoing Gresham College lecture series. You can see a video of the talk below and there is another article based on the talk here.

(Image: The phylogenetic tree of life as produced by Ernst Haeckel.)

When a green-eyed woman has a child with a blue-eyed man, what colour will the child's eyes be? Well, in all likelihood they won't be turquoise, but either blue or green, or perhaps even brown. This fact gives us a clue as to how nature deals with the genetic information that's passed on from one generation to the next. Nature doesn't work with a continuous spectrum of possibilities, but with separate either/or alternatives. This is similar to how computers transmit the reams of information in the digital world, which is made up of the discrete units of 0 and 1. In this article we look at the digital nature of genetic information, and how it appears to be cleverly encoded so that not too many mistakes are made when this information is passed on through the generations.

The colour of peas

The most famous figure in the field of genetics is Charles Darwin, who published his groundbreaking book On the origin of species in 1859. Despite giving an explanation of how species change and evolve, Darwin was unaware of the mechanism behind this process. The digital nature of genetic information was discovered in 1866 by the monk Gregor Mendel. However, the precise mechanism was only identified a hundred years later with the discovery of the structure of DNA.

Mendel was wondering how phenotypes (or traits) of a species are inherited from one generation to another. An example of such a trait in humans might be eye colour, or height. Mendel instead looked at the colour of peas. He bred successive generations of peas of different colours and made very careful quantitative measurements of what he observed. In particular he observed that if he took yellow and green peas as a parent generation and crossed them, then all of the next generation were yellow. If he then allowed each plant from the second generation to pollinate itself (peas can do that) he obtained both yellow and green peas in a ratio of 3:1. This already showed a high degree of mathematical regularity. If he then self-pollinated future generations he ended up with a quarter yellow peas, a quarter green peas, and the remaining half a mixture of both yellow and green, again in the 3:1 ratio.

(Image: Gregor Mendel.)

This was evidence of the discrete nature of inheritance rather than a blending of the yellow and green colours. He explained this by postulating that two factors were responsible for the colour of the pea, a yellow Y factor and a green G factor (today these factors would be called alleles of the same gene). Each plant would have two of these factors (so either YY, YG, GY or GG). A green pea would only arise if there were two G factors. When getting together with another plant to cross-pollinate, a parent plant would pass one of these factors on to its offspring, with each factor having a 50:50 chance of being passed on. A plant getting together with itself to self-pollinate can be regarded as two plants with the same pair of factors cross-pollinating. Mendel started by cross-pollinating yellow plants with the pair of factors YY and green plants with the pair of factors GG.
The second line in the diagram below shows what happens when you combine the first factor from the first parent plant with the first factor from the second parent plant, the first factor from the first plant with the second factor of the second, and so on. It turns out that all offspring plants in this second generation of peas will have the combination YG. The third line shows what combinations of Y and G are possible for the third generation as a result of self-pollination of the second generation. The fourth line shows the combinations that are possible for t[...]
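
A small simulation sketch (not from the article) of the cross just described: starting from YY and GG parents, every second-generation plant is YG, and self-pollinating the second generation gives yellow and green peas in roughly a 3:1 ratio (green appears only for GG). Representing factor pairs as strings is an illustrative choice.

import random
from collections import Counter

def offspring(parent1, parent2):
    """Each parent passes on one of its two factors, chosen with a 50:50 chance."""
    return random.choice(parent1) + random.choice(parent2)

def colour(factors):
    """A pea is green only if it carries two G factors; otherwise it is yellow."""
    return "green" if factors.count("G") == 2 else "yellow"

# First generation: pure yellow (YY) crossed with pure green (GG).
second_gen = [offspring("YY", "GG") for _ in range(10_000)]
print(Counter(colour(p) for p in second_gen))   # all yellow (every plant is YG)

# The second generation self-pollinates: expect yellow and green in about a 3:1 ratio.
third_gen = [offspring(p, p) for p in second_gen]
print(Counter(colour(p) for p in third_gen))    # roughly 7500 yellow, 2500 green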



A very useful pandemic

Fri, 13 Apr 2018 13:20:42 +0000

By Rachel Thomas

It may just be a matter of time before the next influenza pandemic strikes, but you don't have to live your life in fear, says Stephen Kissler, from the Cambridge Infectious Diseases group in DAMTP. "There have been quite a few [pandemics] since the 1918 Spanish flu, and none have come close [to its terrible death toll]." And Kissler has another reason to be cheerful: he is part of a group of Cambridge mathematicians who have just worked with the BBC on a programme, Contagion! The BBC Four Pandemic, and together they have run a ground-breaking citizen science experiment that will help fight future pandemics.

There were two parallel aims for the project. One was producing the documentary with the BBC, describing the science and mathematics behind epidemiology, and the work that is being done on modelling and preparing for influenza epidemics and pandemics. "The second objective was more scientific," says Kissler. "This was to conduct a massive citizen science experiment which collected (with permission) data on people's movement, their reported contacts, and some personal information."

People participated (and can continue to participate, see below) by downloading the BBC Pandemic app on their smartphone (via the App Store or Google Play). The app asked for their age and gender, and then recorded their location every hour for 24 hours. At the end of that period the participant then recorded how many people they had contact with over that period, and some details about the interactions (e.g. conversational versus physical contact, age of contact). The result has been a "state of the art data set which will change how we develop infectious disease models," says Kissler.

The original aim of the project was for around 10,000 people to participate. The project far surpassed that aim, with nearly 30,000 people around the UK taking part, making it the biggest citizen science experiment of its type, and the biggest such data set ever created. "There are no datasets on interpersonal contact that are that big," says Kissler. "The really crucial thing is this data links reported contacts with movement data, with how these people moved about in space. This has never been done before."

Modelling gravity

One of the models used by epidemiologists like Kissler is called a gravity model, named because of its similarity to Isaac Newton's law of gravitation. Newton's law says that the gravitational attraction between two bodies (of masses m1 and m2) is proportional to

m1 × m2 / d²,

where d is the distance between them. Researchers believe a similar rule is true for how people move between two locations. "A gravity model describes how the frequency of human movement decays with distance, just like the force of gravity decays with distance," says Kissler.

This model of movement then forms part of a model of how a disease is spread, as the infection passes from person to person. In particular, it is used to estimate the likelihood that one city or region will become infected at a certain time, depending on the infectious status of the neighbouring cities and regions, their sizes and distance away. "The new data means we can test this model for its accuracy for predicting how people will move," says Kissler. The data will give them more accurate parameters for the model, in particular the power to which you raise the distance (the power of d is 2 in Newton's law). And because of the fantastic detail available in the new dataset, the researchers can see how this varies for different ages.
For the first time they now have detailed insight into the differences in movement patterns between age groups – a key thing they are interested in as epidemiologists. "A certain amount is known about how adults move around, how they commute to work and so on," says Kissler. "But we have no idea how people under 20 [...]
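
A brief sketch (not the researchers' actual code) of the gravity-model idea described above: the expected flow of people between two places is taken to be proportional to the product of their populations divided by a power of the distance between them, and that exponent is exactly the kind of parameter the new data set can help estimate. All names and numbers below are hypothetical.

def gravity_flow(pop_a, pop_b, distance, exponent=2.0, constant=1e-6):
    """Expected movement between two places under a simple gravity model:
    proportional to the product of populations, decaying with distance."""
    return constant * pop_a * pop_b / distance ** exponent

# Hypothetical towns: populations and distance in km, purely for illustration.
print(gravity_flow(200_000, 50_000, 30))    # nearby pair: larger expected flow
print(gravity_flow(200_000, 50_000, 300))   # ten times further: 100 times smaller flow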



Britain in love

Thu, 05 Apr 2018 16:13:09 +0000

By Marianne Freiberger

The Guardian is currently running the relationship project, a "state of the nation report on Brits in love". The project is run in partnership with TSB, with the research being done by Ipsos MORI. In one of the project's articles we came across the following sentence: "The average Brit will have had three long-term romantic relationships in their lifetime, instigating 2.29 break-ups themselves."

(Image: Not all relationships last forever, but the numbers need to balance.)

This, we thought, seems mathematically impossible. For every person who is instigating a break-up there's a person who is being broken up with, so the numbers should balance, with every Brit instigating, on average, 1.5 break-ups.

It's actually not too hard to prove this. Imagine a population of N people in which there have been R long-term relationships in total. This means that on R occasions someone has done the breaking up and on R occasions someone has been broken up with. The average number of times a person instigates a break-up is therefore R/N, and this equals the average number of times a person has been broken up with. Adding those two together gives the average number of relationships per person, regardless of who did the breaking up: 2R/N. Since 2R/N = 3, it follows that R/N, the average number of times a person instigates a break-up, is 1.5.

This calculation assumes that all the relationships have finished, which of course isn't the case in reality. However, the average number of finished relationships per person is less than or equal to the average number of relationships per person. So if we take account of the fact that some relationships might still be going on, but still write R for the number of finished relationships, we get that 2R/N ≤ 3, which means that R/N ≤ 1.5. It definitely can't be 2.29.

So what is behind the Guardian's statement? One possibility is that it was clumsily phrased, and that the figure of 2.29 was calculated with all broken-up relationships in mind, not just the long-term ones. Indeed, in other articles belonging to the relationship project the figure is quoted without any reference to long-termism. If that is the case, then all is well mathematically, and the problem is down to fuzzy writing.

We couldn't resist, however, seeing if there are other explanations. For example, could it be that the statement refers, not to the mean average, but to the median? The mean average of a list of numbers is the sum of the numbers divided by how many numbers there are in total. It's the kind of average we considered above. To get the median, you list all your values in numerical order, including repeated ones, and find the number that's right in the middle of your list (if there isn't a middle because there are an even number of values, then the median is the value that lies half-way between the two middle values). Thus, the median of the list 1, 2, 3, 4, 5 is 3, and the median of the list 1, 2, 3, 4 is 2.5.

The median is often used to define the average person with respect to some activity or characteristic, such as long-term relationships: if you lined all Brits up in order of how many long-term relationships they have had, the median would be marked by the person right in the middle, which is a good reason for calling them an average person. The mean, on the other hand, tells us how many long-term relationships there would be per person if the relationships were distributed evenly in the population.

Figure: Relationships between a population of five people. An arrow between two nodes means that the two corresponding people have had a relationship.
The direction of the arrow indicates who broke up with whom: an arrow pointing from node x to node y means that node x ended the relationship. The median number of relationships is 2, the median number of break-ups instigated is 1, and the median number of break-ups "received" is 0.[...]
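
A small sketch (not from the article, and using a made-up five-person network rather than the one in the figure) showing why the mean number of break-ups instigated must be exactly half the mean number of relationships, since each break-up contributes one "instigated" and one "received", while the median can look quite different.

from statistics import mean, median

# Hypothetical break-ups: each pair (ender, other) is one finished relationship.
people = ["A", "B", "C", "D", "E"]
breakups = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"), ("B", "C"), ("D", "E")]

instigated = {p: sum(ender == p for ender, _ in breakups) for p in people}
received = {p: sum(other == p for _, other in breakups) for p in people}
relationships = {p: instigated[p] + received[p] for p in people}

print(mean(relationships.values()))   # 2.4 relationships per person on average
print(mean(instigated.values()))      # 1.2: exactly half the mean number of relationships
print(median(instigated.values()))    # 1: the median tells a different story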



Join an edit-a-thon to improve the presence of female mathematicians

Thu, 05 Apr 2018 16:08:09 +0000

Our fellow maths journal Aperiodical is coordinating a distributed Wiki Edit Day on Saturday 12th May (the birthday of mathematician Florence Nightingale). They are inviting as many people as possible to join in, from wherever they are, to edit and improve the presence of female mathematicians on Wikiquote's maths quotes page, and possibly to continue editing elsewhere on Wikimedia.

The editing will happen on Saturday 12th May 2018, from 10am-3pm. Join in by loading a shared Google document from 10am on 12th May, finding things to add (and references), and making edits to the pages. Local editing sessions will be organised in different locations, and details will be shared in the doc, as well as a link to a video Hangout everyone can join in with to discuss edits.

There is more information on this Aperiodical blog post.

As Florence Nightingale said:

"I attribute my success to this – I never gave or took any excuse."
