Subscribe: Computational Complexity
http://weblog.fortnow.com/rss.xml

Computational Complexity



Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch



Last Build Date: Fri, 17 Nov 2017 12:30:38 +0000

 



A Tale of Three Rankings

Thu, 16 Nov 2017 13:42:00 +0000

In the spring of 2018, US News and World Report should release their latest rankings of US graduate science programs, including computer science. These are the most cited of the deluge of computer science rankings we see out there. The US News rankings have a long history, and since they are reputation based they roughly correspond to how we see CS departments, though some argue that reputation changes only slowly with the quality of a department.

US News and World Report also has a new global ranking of CS departments. The US doesn't fare that well on the list, and the rankings of the US programs on the global list are wildly inconsistent with the US list. What's going on?

75% of the global ranking is based on statistics from Web of Science, which mainly captures journal articles, whereas in computer science conferences typically have higher reputations and more selectivity. In many European and Asian universities, hiring and promotion often depend heavily on publications and citations in Web of Science, encouraging their professors to publish in journals and thus leading to higher-ranked international departments.

The CRA rightly put out a statement urging the CS community to ignore the global rankings, though I wish they had made a distinction between the two different US News rankings.

I've never been a fan of using metrics to rank CS departments but there is a relatively new site, Emery Berger's Computer Science Rankings, based on the number of publications in major venues. CS Rankings passes the smell test for both their US and global lists and is relatively consistent with the US News reputation-based CS graduate rankings.

Nevertheless I hope CS Rankings will not become the main ranking system for CS departments. Departments that wish to raise their ranking would hire faculty based mainly on their ability to publish large numbers of papers in major conferences. Professors and students would then focus on quantity of papers, and this would in the long run discourage risk-taking, long-range research, as well as innovations in improving diversity or educating graduate students.

As Goodhart's Law states, "when a measure becomes a target, it ceases to be a good measure". Paradoxically CS Rankings can lead to good rankings of CS departments as long as we don't treat it as such.



Can you measure which pangrams are natural?

Mon, 13 Nov 2017 14:48:00 +0000

A pangram is a sentence that contains every letter of the alphabet.

The classic is:

                                      The quick brown fox jumps over the lazy dog.

(NOTE- I had `jumped' but a reader pointed out that there was no s, and that `jumps' is the correct word)

which is only 31 letters.

I could give a pointer to lists of such, but you can do that yourself.

My concern is:

a) are there any pangrams that have actually been uttered NOT in the context of `here is a pangram'

b) are there any that really could be.

That is: which pangrams are natural? I know this is an ill-defined question.

Here are some candidates for natural pangrams

1) Pack my box with five dozen liquor jugs

2) Amazingly few discotheques provide jukeboxes

3) Watch Jeopardy! Alex Trebek's fun TV quiz game

4) Cwm fjord bank glyphs vext quiz
(Okay, maybe that one is not natural as it uses archaic words. It means
``Carved symbols in a mountain hollow on the bank of an inlet irritated an
eccentric person.'' Could come up in real life. NOT. It uses every letter
exactly once.)
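Whatever `natural' means, pangram-hood itself is easy to verify mechanically. A minimal check in Python, run on the candidates above:

```python
import string

def is_pangram(sentence):
    # A pangram uses every letter of the alphabet at least once.
    letters = {c for c in sentence.lower() if c in string.ascii_lowercase}
    return len(letters) == 26

candidates = [
    "The quick brown fox jumps over the lazy dog.",
    "Pack my box with five dozen liquor jugs",
    "Amazingly few discotheques provide jukeboxes",
    "Watch Jeopardy! Alex Trebek's fun TV quiz game",
    "Cwm fjord bank glyphs vext quiz",
]
print(all(is_pangram(s) for s in candidates))  # True
```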

How can you measure how natural they are?

For the Jeopardy one I've shown it to people and asked them

``What is unusual about this new slogan for the show Jeopardy?''

and nobody gets it. More importantly, they believe it is the new slogan.

So I leave to the reader:

I) Are there other NATURAL pangrams?

II) How would you test naturalness of such?

Pinning down `natural' is hard. I did a guest post in 2004, before I was an official co-blogger, about when a problem (a set, for us) is natural, for example the set of all regular expressions with squaring (see here).



Advice for the Advisor

Thu, 09 Nov 2017 19:08:00 +0000

A soon-to-be professor asked me recently if I could share some ideas on how to advise students. I started to write some notes only to realize that I had already posted on the topic in 2006.
Have students work on problems that interest them not just you. I like to hand them a proceedings of a recent conference and have them skim abstracts to find papers they enjoy. However if they stray too far from your research interests, you will have a hard time pushing them in the right directions. And don't work on their problems unless they want you to.
Keep your students motivated. Meet with them on a regular basis. Encourage students to discuss their problems and other research questions with other students and faculty. Do your best to keep their spirits high if they have trouble proving theorems or are not getting their papers into conferences. Once they lose interest in theory they won't succeed.
Feel free to have them read papers, do some refereeing and reviewing, give talks on recent great papers. These are good skills for them to learn. But don't abuse them too much.
Make sure they learn that selling their research is as important as proving the theorems. Have them write the papers and make them rewrite until the paper properly motivates the work. Make them give practice talks before conferences and do not hold back on the criticism.
Some students will want to talk about some personal issues they have. Listen as a friend and give some suggestions without being condescending. But if they have a serious emotional crisis, you are not trained for that; point them to your university counseling services.
Once it becomes clear a student won't succeed working with you, or won't succeed as a theorist, or won't succeed in graduate work, cut them loose. The hardest thing to do as an advisor is to tell a student, particularly one who tries hard, that they should go do something else. It's much easier to just keep them on until they get frustrated and quit, but you do no one any favors that way.
Computer science evolves dramatically but the basic principles of advising don't. This advice works pretty much as well now as it did in 2006, in the '80s when I was a student, or even in the 18th century. Good advising never goes out of style.

Of course I don't and can't hand out a physical proceedings to a student to skim. Instead I point to on-line proceedings but browsing just doesn't have the same feel.

Looking back I would add some additional advice. Push your students and encourage them to take risks with their research. If they aren't failing to solve their problems, they need to try harder problems. We too often define success as having a paper accepted into a conference. Better to have an impact on what others do.

Finally remember that advising doesn't stop at the defense. It is very much a parent-child relationship that continues long after graduation. Your legacy as a researcher will eventually come to an end. Your legacy as an advisor will live on through those you advise and their students and so on to eternity.



The two fears about technology- one correct, one incorrect

Mon, 06 Nov 2017 16:44:00 +0000


When the Luddites smashed loom machines, their supporters (including Lord Byron, Ada Lovelace's father) made two arguments in favor of the Luddites (I am sure I am simplifying what they said):

  1. These machines are tossing people out of work NOW and this is BAD for THOSE people. In this assertion they were clearly correct. (`Let's just retrain them' only goes so far.)
  2. This is bad for mankind! Machines displacing people will lead to the collapse of civilization! Mankind will be far worse off because of technology. In this assertion I think they were incorrect. That is, I think civilization is better off now because of technology. (If you disagree, leave an intelligent, polite comment. Realize that just by leaving a comment you are using technology. That is NOT a counterargument. I don't think it's even IRONY. Not sure what it is.)
  3. (This third one is mine and it's more of a question.) If you take the human element out of things then bad things will happen. There was a TV show where a drone was about to strike a target, but a HUMAN noticed there were red flowers on the car, deduced it was a wedding, and so the strike was called off. Yeah! But I can equally well see the opposite: a computer program notices things indicating it's not the target that a person would have missed. But of course that wouldn't make as interesting a story. More to the point: if we allow computers to make decisions without the human element, is that good or bad? For grad admissions does it get rid of bias or does it reinforce bias? (See the book Weapons of Math Destruction for an intelligent argument against using programs for, say, grad admissions and other far more important things.)
I suspect that the attitudes above greeted every technological innovation. For AI there is a similar theme but with one more twist: the machines will eventually destroy us! Bill Gates and Stephen Hawking have expressed views along these lines.

When Deep Blue beat Kasparov in chess there were some articles about how this could be the end of mankind. That's just stupid. For a more modern article on some of the dangers of AI (some reasonable, some not) see this article on Watson.

It seems to me that AI can do some WELL-DEFINED tasks (e.g., chess) very well, and even some not-quite-so-well-defined things (natural language translation) very well, but the notion that machines will evolve to be `really intelligent' (not sure that is well defined), think they are better than us, and destroy us seems like bad science fiction (or good science fiction).

Watson can answer questions very very well, Medical diagnosis machines may well be much better than doctors. While this may be bad news for Ken Jennings and for doctors, I don't see it being bad for humanity in the long term. Will we one day look at the fears of AI and see that they were silly--- the machines did not, terminator-style, turn against us? I think so. And of course I hope so.
  





Matching and Complexity

Thu, 02 Nov 2017 14:44:00 +0000

Given a group of people, can you pair them up so that each pair are Facebook friends with each other? This is the famous perfect matching problem. The complexity of matching has a rich history which got a little richer in the past few months.

For bipartite graphs (consider only friendships between men and women), we have had fast matching algorithms since the 1950s via augmenting paths. In his classic 1965 paper, Paths, Trees, and Flowers, Jack Edmonds gives a polynomial-time algorithm for matching on general graphs. This paper also laid out an argument for polynomial time as efficient computation, which would lead to the complexity class P (of P v NP fame).
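For readers who haven't seen it, here is a minimal sketch of the augmenting-path idea for bipartite graphs (Kuhn's algorithm); the friendship graph below is a made-up example, not from the post:

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        # Search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be rematched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size, match_right

# Three people on each side of a made-up friendship graph.
size, match = max_bipartite_matching([[0, 1], [0], [1, 2]], 3, 3)
print(size)  # 3: a perfect matching exists
```

Edmonds' general-graph algorithm needs the additional "flowers" (blossom-shrinking) machinery, since odd cycles break this simple alternating search.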

After Razborov showed that the clique problem doesn't have polynomial-size monotone circuits, his proof techniques were used to show that matching doesn't have polynomial-size monotone circuits either, and Raz and Wigderson showed that monotone circuits for matching require exponential size and linear depth. Because of Edmonds' algorithm, matching does have polynomial-size circuits in general. NOTs are very powerful.

Can one solve matching in parallel, say in the class NC (Nick's Class, after Pippenger) of problems computable by a polynomial number of processors in polylogarithmic time? Karp, Upfal and Wigderson give a randomized NC algorithm for matching. Mulmuley, Vazirani and Vazirani prove an isolation lemma that allows a randomized reduction of matching to the determinant. Howard Karloff exhibited a Las Vegas parallel algorithm, i.e., one that never makes a mistake and runs in expected polylogarithmic time.

Can one remove the randomness? An NC algorithm for matching remains elusive, but this year brought two nice results in that direction. Ola Svensson and Jakub Tarnawski give a quasi-NC algorithm for general graph matching. Quasi-NC means a quasipolynomial (2^{polylog}) number of processors. Nima Anari and Vijay Vazirani give an NC algorithm for matching on planar graphs.

Matching is up there with primality, factoring, connectivity, graph isomorphism, satisfiability and the permanent as specific problems that have played a large role in helping us understand complexity. Thanks, matching problem, and may you find NC nirvana in the near future.



The k=1 case is FUN, the k=2 case is fun, the k ≥ 3 case is... you decide.

Tue, 31 Oct 2017 12:17:00 +0000

 (All of the math in this post is in here.)


The following problem can be given as a FUN recreational problem to HS students or even younger: (I am sure that many of you already know it but my point is how to present it to HS students and perhaps even younger.)

Alice will say all but ONE of the elements of {1,...,10^10} in some order.

Bob listens with the goal of figuring out the missing number. Bob cannot possibly store 10^10 numbers in his head. Help Bob out by giving him an algorithm which will not make his head explode.

This is an easy and fun puzzle. The answer is in the writeup  I point to above.

The following variant is a bit harder but a bright HS student could get it: Same problem except that Alice leaves out TWO numbers.
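For concreteness, the standard streaming answers for k=1 and k=2 (which may or may not match the writeup linked above) keep only a running sum, and a running sum plus sum of squares, respectively:

```python
from math import isqrt

def find_one_missing(stream, n):
    # Bob keeps only a running sum; the missing number is the
    # difference from the full sum 1 + 2 + ... + n.
    return n * (n + 1) // 2 - sum(stream)

def find_two_missing(stream, n):
    # Keep the sum and the sum of squares, then solve
    # x + y = s and x^2 + y^2 = q for the missing pair.
    stream = list(stream)  # we iterate twice below
    s = n * (n + 1) // 2 - sum(stream)                               # x + y
    q = n * (n + 1) * (2 * n + 1) // 6 - sum(v * v for v in stream)  # x^2 + y^2
    p = (s * s - q) // 2      # xy, from (x+y)^2 = x^2 + 2xy + y^2
    d = isqrt(s * s - 4 * p)  # x and y are roots of t^2 - s*t + p = 0
    return (s - d) // 2, (s + d) // 2

print(find_one_missing((x for x in range(1, 11) if x != 7), 10))  # 7
print(find_two_missing((x for x in range(1, 101) if x not in (17, 42)), 100))  # (17, 42)
```

Note that the k=2 case already needs the quadratic formula, which hints at why the levels of hardness climb with k.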

The following variant is probably more appropriate for a HS math competition than for a FUN gathering of HS students: Same problem except that Alice leaves out THREE numbers.

The following variant may be easier because it's harder: Alice leaves out k numbers, for k a constant. This might be easier than the k=3 case since the solver knows NOT to use special properties of 3.

I find it interesting that the k=1, k=2, and k ≥ 3 cases are on different levels of hardness. I would like a more HS-level answer to the k ≥ 3 case.




2017 Fall Jobs Post

Thu, 26 Oct 2017 15:04:00 +0000

You're finishing up grad school or a postdoc and ask yourself what should I do for the rest of my life? We can't answer that for you but we can help you figure out your options in the annual fall jobs post. We focus mostly on the academic jobs. You could work in industry but there's nothing like choosing your own research directions and working directly with students and taking pride in their success.

For computer science faculty positions it's best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. AcademicKeys also lists a number of CS jobs. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.

It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway since some departments may hire theorists when other opportunities to hire dry up. Think global--there are growing theory groups around the world and in particular many have postdoc positions to offer.

The computer science job market remains hot with most CS departments trying to hire multiple faculty. Many realize the importance of having a strong theory group, but it doesn't hurt if you can tie your research to priority areas like big data, machine learning and security.

Remember, in your research statement, your talk and your interview you need to sell yourself as a computer scientist, not just a theorist. Show interest in other research areas and, especially in your 1-on-1 meetings, find potential ways to collaborate. Make the faculty in the department want you as a colleague, not just someone hiding out proving theorems.

Good luck to all on the market and we can't wait for our Spring 2018 jobs post to see where you all end up.



Open: PROVE that pumping and reductions can't prove every non-reg lang non-reg.

Mon, 23 Oct 2017 15:56:00 +0000

Whenever I post on regular languages, whatever aspect I am looking at, I get a comment telling me that we should stop proving the pumping lemma (and often asking me to stop talking about it) and instead have our students prove things non-regular by either the Myhill-Nerode theorem or by Kolmogorov complexity. I agree with these thoughts pedagogically, but I am curious: Is there a non-regular language L such that you CANNOT prove L non-regular via pumping and reductions?

There are many pumping theorems (one of which is an iff, so you could use it on all non-regular languages, but you wouldn't want to; it's in the paper pointed to later). I'll pick the most powerful pumping lemma that I can imagine teaching to a class of ugrads:

If L is regular then there exists n_0 such that for all w ∈ L with |w| ≥ n_0, and for all prefixes x' of w with |w| - |x'| ≥ n_0, there exist x, y, z such that
1) |x| ≤ n_0,
2) y is nonempty,
3) w = x'xyz, and
4) for all i ≥ 0, x'xy^i z ∈ L.

If this is all we could use then the question is silly: just take { w : number of a's NOT EQUAL number of b's }, which is not regular but satisfies the pumping lemma above. SO I also allow closure properties. I define (and this differs from my last post; I thank my readers, some of whom emailed me, for help in clarifying the question):

A ≤ B if there exists a function f such that if f(A) = B then A regular implies B regular (e.g., f(A) = A ∩ a*b*).

(CORRECTION: Should be B regular implies A regular. Paul Beame pointed this out in the comments.)

(CORRECTION: My definition does not work. I need something like what one of the commenters suggested and what I had in a prior blog post. Call CL a closure function if for all A, A regular implies CL(A) regular; for example f(A) = A ∩ a*b*. We want a^n b^n ≤ { w : number of a's = number of b's } via f(A) = A ∩ a*b*. So we want: A ≤ B if there is a closure function f with f(B) = A.)

A set B is easily proven non-regular if either
a) B does not satisfy the pumping lemma, or
b) there exists a set A that does not satisfy the pumping lemma such that A ≤ B.
OPEN QUESTION (at least open for me; I am hoping someone out there knows the answer): Is there a language that is not regular but NOT easily proven non-regular?

Ehrenfeucht, Parikh and Rozenberg, in a paper Pumping Lemmas for Regular Sets (I could not find the official version but I found the tech report online: here. Ask your grandparents what a tech report is. Or see this post here from Lance about tech reports), proved an iff pumping lemma. They gave as their motivating example an uncountable number of languages that could not be proved non-regular even with a rather fancy pumping lemma. But their languages CAN be easily proven non-regular. I describe that here. (This is the same paper that proves an iff pumping lemma. It uses Ramsey Theory so I should like it. Oh well.)

SO, I looked around for candidates for non-regular languages that could not be easily proven non-regular. The following were candidates, but I unfortunately(?) found ways to prove them non-regular using the pumping lemma and closure (I found the ways by asking some bright undergraduates; to give credit, Aaron George did these):

{ a^i b^j : i and j are relatively prime }

{ x x^R w : x, w nonempty }, where R is reverse.

I leave it to the reader to prove these are easily proven non-regular.

To reiterate my original question: Find a non-regular language that is not easily proven non-regular.

Side question: my definition of reduction seems a bit odd in that I am defining it the way I want it to turn out. Could poly-Turing reductions have been defined as A ≤ B iff (if A is in P then B is in P)? Is that equivalent to the usual definition? Can I get a more natural definition for my regular reductions? [...]



The Amazon Gold Rush

Thu, 19 Oct 2017 13:15:00 +0000


Unless you've been hiding under a rock, you've heard that Amazon wants to build a second headquarters in or near a large North American city. Amazon put out a nice old-fashioned RFP.
Please provide an electronic copy and five (5) hard copies of your responses by October 19, 2017 to amazonhq2@amazon.com. Please send hard copies marked “confidential” between the dates of October 16th – 19th to ...
Hard copies? Just like the conference submissions of old. Key considerations for Amazon: a good site, local incentives, a highly educated labor pool and strong university system, proximity to major highways and airports, cultural community fit and quality of life.

I've seen companies put subsidiaries in other cities, or move their headquarters away from their manufacturing center, like when Boeing moved to Chicago. But building a second headquarters, "a full equal" to their Seattle campus, seems unprecedented for a company this size. Much as a company has only one CEO or a college has one president, having two HQs raises the question of where decisions get made. But Amazon is not a typical company, and maybe location means less these days.

Atlanta makes many short lists. We've got a burgeoning tech community, a growing city, sites with a direct train into the world's busiest airport, good weather, low cost of living and, of course, great universities. Check out Techlanta and ChooseATL.

So am I using Amazon's announcement as an excuse to show off Atlanta? Maybe. But winning the Amazon HQ2 would be transformative to the city, not only in the jobs it would bring, but in immediately branding Atlanta as a new tech hub. Atlanta will continue to grow whether or not Amazon comes here but high profile wins never hurt.

Many other cities make their own claims on Amazon and I have no good way to judge this horse race (where's the prediction market?). Impossible to tell how Amazon weighs their criteria and it may come to which city offers the best incentives. Reminds me of the Simons Institute Competition announced in 2010 (Berkeley won) though with far larger consequences.



Reductions between formal languages

Mon, 16 Oct 2017 15:59:00 +0000


Let EQ = {w : number of a's = number of b's }

Let EQO = { a^n b^n : n ∈ N } (so it's Equal and in Order)

Typically we do the following:

Prove EQO is not regular by the pumping lemma.

Then to show EQ is not regular you say: if EQ were regular then EQ INTERSECT a*b* = EQO would be regular, hence EQ is not regular. (I know you can also show EQ is not regular with the pumping lemma directly, but that's not important now.)

One can view this as a reduction:

A  ≤  B

If one can take B, apply a finite sequence of closure operations (e.g., intersect with a regular language,
complement, replace all a's with aba, replace all a's with e (the empty string)), and get A.

If A is not regular and A≤ B then B is not regular.

Note that

EQO ≤ EQ ≤ \overline{EQ}

Since EQO is not regular (by pumping) we get that EQ and \overline{EQ} are not regular.

Hence we could view the theory of showing things not-reg like the theory of NP completeness
with reductions and such. However, I have never seen a chain of more than length 2.

BUT consider the following! Instead of using the pumping lemma we use communication complexity. I have been able to show (and this was well known) that EQ is not regular using communication complexity:

EQH = { (x,y) : |x|=|y| and number of a's in xy = number of b's in xy }

The communication complexity of EQH is known to be log n + Θ(1). Important: NOT O(1).

If EQ were regular then Alice and Bob would have an O(1) protocol: Alice runs x through the DFA and
transmits the resulting state to Bob; Bob then runs y from that state to the end and transmits 1 if he ends up in an accepting state, 0 if not.
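The protocol is easy to mock up. The toy DFA below is an assumption for the demo: it tracks #a − #b clamped to ±2, so it only recognizes a bounded version of EQ (EQ itself has no DFA, which is the whole point), but it illustrates the "send one state" idea:

```python
def run_dfa(delta, state, s):
    # Standard DFA simulation: follow one transition per character.
    for ch in s:
        state = delta[state][ch]
    return state

# Toy DFA: states are #a - #b clamped to [-2, 2]; the accept state is 0.
delta = {d: {'a': min(d + 1, 2), 'b': max(d - 1, -2)} for d in range(-2, 3)}

def alice(x):
    # Alice runs her half through the DFA and transmits only the state.
    return run_dfa(delta, 0, x)

def bob(y, state_from_alice):
    # Bob continues from Alice's state and reports accept/reject.
    return run_dfa(delta, state_from_alice, y) == 0

print(bob("bba", alice("aab")))  # True: "aabbba" has equal a's and b's
```

Since Alice's message is a single state out of a constant-size set, the cost is O(1) bits, contradicting the log n + Θ(1) lower bound for EQH if EQ had a real DFA.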

But I was not able to show EQO is not regular using communication complexity. SO imagine a bizarro world where I taught my students the communication complexity approach but not the pumping lemma. Could they prove that EQO is not regular? For one thing, could they prove

EQO ≤ EQ  ?

Or show that this CANNOT be done.

Anyone know?

One could also study the structure of the degrees induced by the equiv classes.
If this has already been done, let me know in the comments.









Lessons from the Nobel Prizes

Thu, 12 Oct 2017 12:52:00 +0000

We've had a big week of awards with the Nobel Prizes and the MacArthur "Genius" Fellows. The MacArthur Fellows include two computer scientists, Regina Barzilay and Stefan Savage, and statistician Emmanuel Candès, but no theoretical computer scientists this year.

No computer scientists among the Nobel Laureates either, but technology played a large role in the chemistry and physics prizes. The chemistry prize went for a fancy microscope that can determine biomolecular structure. The LIGO project, which measures extremely weak gravitational waves, received the physics prize.

In a sign of the times, Jeffrey Hall, one of the medical prize recipients, left science due to lack of funding.

The economics prize went to Richard Thaler, who described how people act irrationally but often in predictable ways, such as the endowment effect: people give more value to an object they own than to one they don't currently have. The book Thinking, Fast and Slow by 2002 Laureate Daniel Kahneman does a great job describing these behaviors.

While at Northwestern I regularly attended the micro-economics seminars many of which tried to give models that described the seemingly irrational behaviors that researchers like Thaler brought to light. My personal theory: Humans evolved to have these behaviors because while they might not be the best individual choices they make society better overall.



Michael Cohen

Mon, 09 Oct 2017 16:14:00 +0000

When I first saw posts about Michael Cohen (see here, here, here) I wondered

is that the same Michael Cohen who I knew as a HS student?

It is.  I share one memory.

Michael Cohen's father is Tom Cohen, a physics professor at UMCP. They were going to a Blair High School science fair and I got a ride with them (I had some students presenting at it). In the car, Michael began telling his dad that his dad's proofs were not rigorous enough. I was touched by the notion that father and son could even have such a conversation.

Were Tom's proofs rigorous? I suspect that for physics they were. But the fact that Michael could, as a high school student, read his dad's papers and have an opinion on them was very impressive. And very nice.

Michael was brilliant. It's a terrible loss.



Is the Textbook Market doomed?

Thu, 05 Oct 2017 17:35:00 +0000

STORY ONE: I always tell my class that it's OKAY if they don't have the latest edition of the textbook, and that if they can find a cheap earlier edition (often on Amazon, sometimes on eBay), that's fine. A while back, at the beginning of a semester, I was curious whether the book really did have many cheap editions, so I typed in the book's name. I found a free pdf copy as the fourth hit. This was NOT on some corner of the dark web. This was easy to find and free. There were a few things not quite right about it, but it was clearly fine to use for the class. I wanted to post this information on the class website but my co-teacher was worried we might get in trouble for it, and he pointed out that the students almost surely already know, so we didn't. (I am sure that's correct. When I've discussed this issue with people, they are surprised I didn't already know that textbooks are commonly on the web and easy to find.)

STORY TWO: I know someone who is thinking of writing a cheap text for a CS course. It will only be $40.00. That is much cheaper than the cost of a current edition of what's out there, and competitive with the used editions, but of course much more expensive than free. I think once students get used to free textbooks, even $40.00 is a lot.

STORY THREE (What I do): For discrete math we had slides online, videos of the lectures online, and some notes online. For smaller classes I have my own notes online. The more I teach a course the better the notes get, as I correct them and polish them every time I teach. Even so, the notes are very good if you've gone to class but not very good if you haven't (that is not intentional; it's more a matter of, say, my notes not having actual pictures of DFAs and NFAs). I have NO desire to polish the notes more and make a book out of them. Why do some people have that urge? I can think of two reasons, though I am sure there are more:

(1) To make money. If you get a text out early in a field then this could work (I suspect the CLR algorithms text made money). I wonder if Calc I books still make money given how many there are. But in any case, this motivation is now gone, which is one of the points of this post.

(2) You feel that your way to present (say) discrete math is SO good that others should use it too! But now you can just post a book or notes on the web, or do a presentation at SIGCSE or other comp-ed venues. You don't need to write a textbook. (Personally I think this is a bit odd anyway: people should have their own vision for a course. Borrowing someone else's seems strange to me.)

DEATH SPIRAL: Books cost a lot, so students buy them cheap or get free downloads, so the companies do not make money, so they raise the price of the book, so students buy them cheap... (I'm not going to get into whose fault this is or who started it; I'm just saying that this is where we are now.)

With books either cheap-used or free, how will the textbook market survive? Or will it? Asking around I got the following answers:

1) There will always be freshmen who don't know that books can be cheap or free. This might help with Calc I and other first-year courses, but not beyond that.

2) There will always be teachers who insist the students buy the latest edition so that they can assign problems easily, e.g., `HW is page 103, problems 1, 3, 8 and page 105, problems 19 and 20.' This will help the textbook publishers in that window between the new edition coming out and the book being scanned in. Is that a long window?

3) Some textbooks now come with added gizmos: codes on the web to get some stuff. For the teachers there may be online quizzes. Unfortunately this makes the books cost even more. I personally never fou[...]



Monty Hall (1921-2017) and His Problem

Sun, 01 Oct 2017 18:44:00 +0000

Monty Hall passed away yesterday, best known for co-creating and hosting the game show Let's Make a Deal, a show I occasionally watched as a kid. To the best of my knowledge he's never proven a theorem so why does he deserve mention in this blog?

For that we turn back the clock to 1990, when I was a young assistant professor at Chicago, more than a decade before this blog started, even before the world-wide web. The Chicago Tribune was a pretty good newspaper in those days before Craigslist. Nevertheless, the Sunday Tribune, as well as many other papers across the country, included Parade, a pretty fluffy magazine. Parade had (and still has) a column "Ask Marilyn" written by Marilyn vos Savant, who does not hide the fact that she had the world's highest IQ according to the record books in the 1980s.

In 1990, vos Savant answered the following question in her column. Think about the answer if you haven't seen it before.
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
This is the kind of deal Monty Hall might have made on his show and so his name got attached to the problem in a 1976 paper in the American Statistician. Marilyn vos Savant claimed it was an advantage to switch. Many mathematicians at the time wrote into Parade arguing this was wrong--either way you have a 50% chance of winning. Even several of my fellow colleagues initially believed it made no difference to switch. Who was this low-brow magazine columnist to say otherwise? In fact, Marilyn was right.

Here is my simple explanation: If you make the commitment to switch, you will win if you pick a goat in the first round, a 2/3 chance of happening. Thinking it makes no difference is a fallacy in conditional probability, not unlike Mossel's Dice Paradox.

Monty Hall himself ran an experiment in his home in 1991 to verify that Marilyn was correct, though only under the assumptions that the host would always offer the switch and that everything was chosen uniformly.
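You can rerun a version of Monty's experiment in a few lines. This is a sketch under the same standard assumptions (host always opens a goat door and always offers the switch); the function name is mine:

```python
import random

def play(switch, rng):
    # Car placed uniformly; you pick uniformly; the host opens a goat
    # door that is neither your pick nor the car (when your pick IS the
    # car, he opens the lowest-numbered goat door -- the choice of which
    # goat door does not affect the win probability).
    car = rng.randrange(3)
    pick = rng.randrange(3)
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(2017)
n = 100_000
win_switch = sum(play(True, rng) for _ in range(n)) / n
win_stay = sum(play(False, rng) for _ in range(n)) / n
print(win_switch, win_stay)  # roughly 2/3 and 1/3
```

Switching wins exactly when your first pick was a goat, which happens 2/3 of the time, and the simulation agrees.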

Thanks to Bill Gasarch and Evan Golub for some useful details and links. Bill says "history being history, Monty Hall will be remembered as a great mathematician working in Probability." Maybe not, but it does get him remembered in the computational complexity blog.



Tragic Losses

Thu, 28 Sep 2017 14:20:00 +0000

I'd like to remember two young people whose lives were taken way too early. I didn't know either well, but both played large roles in two different communities.

Michael Cohen, a young researcher in theoretical computer science, passed away. He had a number of great algorithmic results, most notably his solely authored paper giving a polynomial-time algorithm to construct Ramanujan graphs. Luca Trevisan and MSR give their remembrances. Update (10/5): See also Scott Aaronson, whose post includes comments from Michael's mother and father and a Daily Cal article.

Scout Schultz, in a story that made national news, studied computer engineering at Georgia Tech. On Saturday, September 16th, Scout was shot by a member of the Georgia Tech campus police. A vigil was held the following Monday, quite peaceful until a splinter group (mostly not Georgia Tech students) broke off, marched to the Georgia Tech police department and set a police car on fire. The death and its aftermath have shaken us all up at Georgia Tech. What has impressed me during this time is the strength of the Georgia Tech student body. Instead of focusing on blame, they have come together to remember Scout, a leader of the LGBTQ community on campus. Being in a liberal city in a conservative state, the politics of the student body is quite mixed, but it doesn't divide the students, rather it brings them together. There's hope yet. [...]



Science fiction viewers used to embrace diversity (or did they) and now they don't (or do they)

Mon, 25 Sep 2017 02:01:00 +0000

(This post is inspired by the choice of a female to be the next Doctor on the TV show Dr. Who. Note that you can't say `the next Dr. Who will be female' since Dr. Who is not the name of the character. The name has not been revealed. Trivia: the first Dr. Who episode aired the same day Kennedy was shot.)

I give a contrast and then say why it might not be valid:

Star Trek, the original series, 1966. There is a black female communications officer, a Russian officer and an Asian officer. And science fiction viewers EMBRACED and APPROVED of this (for the time) diversity.

Modern times: a black stormtrooper in Star Wars VII (see here), a black Jimmy Olsen in Supergirl (see here), female Ghostbusters (see here), a female Doctor on Dr. Who (see here and here), and even the diversity of ST-Discovery (see here) have upset science fiction viewers.

So what happened in 50 years? Now I say why this contrast might not be valid. All items here are speculative; I welcome comments that disagree intelligently. Or agree intelligently. Or raise points about the issue.

1) Science fiction fans aren't racist and anti-women, they just don't like change. Star Trek: The Original Series didn't have an original canon to violate. Having a black captain (ST:DS9) or a female captain (ST:VOY) was a matter of NEW characters, and I don't recall any objections. (Were there objections?) If in the ST reboot they made Captain Kirk black, I suspect there would be objections which the objectors would claim are not racist. Would they be?

2) While the fans that are upset get lots of coverage, they might be a minority. I sometimes see more stuff on the web arguing against the racism than the racism itself. (A friend of mine in South Carolina told me that whenever a Confederate monument is about to be taken down the SAME 12 people show up to protest but get lots of coverage.)

3) Science fiction has gotten much more mainstream, so the notion that `science fiction viewers now do BLAH' is rather odd since it's no longer a small community.

4) In 1966 there was no internet (not even in the Star Trek universe!!) for fans and/or racists to vent their anger.

5) Some of the objections have valid counterparts: "I don't mind Jimmy Olsen being black, I mind him being so handsome, whereas in the Superman canon he is not." (Counter: some of the objections are repulsive: "I don't mind Jimmy Olsen being black, I mind him being a love interest for Supergirl." Gee, why is that?)

5a) Another `valid' one: `storm troopers were all cloned from ONE white guy, so there cannot be a black stormtrooper.' Racism hiding behind nitpicking? Actual nitpicking?

6) I give the fans back in 1966 too much credit: it was the showrunners who embraced diversity. The fans, did they care?

6a) I give the showrunners too much credit. ALL Klingons are war-like, ALL Romulans are arrogant, ALL Vulcans are logical (except during Pon Farr), and in the more recent shows like ST-TNG ALL Ferengi are greedy. So the show accepts that stereotypes can be true.

6b) Women were not portrayed that well in the Star Trek universe, even in the more recent shows. See 15 real terrible moments for women on Star Trek.

7) Even the 1966 ST was not as diverse as I make it out to be. I doubt it would pass the Bechdel test.

Two other points of interest:

1) In the 1960's science fiction was sometimes used as a way to talk about current issues, since talking about them directly would not have been allowed. We can't really talk about real racism in a TV show, so we'll have an alien race where they are all half-black, half-white, but di[...]



Acronyms and PHP

Thu, 21 Sep 2017 23:46:00 +0000

Whenever I teach discrete math and use FML to mean Formula, the students laugh since it's a common acronym for Fuck My Life. Now they laugh, and I say I know why you are laughing, I know what it means, and they laugh even harder.

BUT it got me thinking: Pigeonhole Principle! There are more things we want short acronyms for than there are short acronyms. Below are some I thought of. I am sure there are others, in fact I am sure there are websites of such, but I wanted to see which ones I just happen to know.

AMS- American Math Society and much much more:see here

DOA-

Dead on Arrival

Department of Aging. Scary!

ERA-

Earned Run Average in Baseball,

Equal Rights Amendment in politics

PCP-

Phencyclidine, a drug that you should never take.

Probabilistically Checkable Proofs. Obscure to the public but not to us.

ADDED LATER: A reader noted Post Correspondence Problem, a good example of a natural undecidable problem.

IRA- 

Irish Republican Army

Individual Retirement Account

Several companies have been rumored to fund terrorism because they were giving their employees IRAs. The headline `Company X funds IRAs' could be misunderstood.


SAT-

Scholastic Aptitude Test

Satisfiability (of Boolean formulas). Obscure to the public but not to us. Actually it may get less obscure as more ``proofs'' resolving P vs NP come out.

SJW

Single Jewish Female (in classified ads- more on that later). I think SJF is more common.

Social Justice Warrior (sounds like a good thing but might not be)

Classified ads are a source of many acronyms which can be used to teach combinatorics.

{S,M,W,D,G}{B,C,H,J,W}{M,F}


S-single, M-married, W-widowed, D-divorced, G-gay (this one I've seen alone, making me wonder about S/M/W/D; I've also seen four-letter acronyms to disambiguate).

B- black, C-Christian, H-Hispanic,  J-Jewish, W-White.

M,F- Male, Female, though I am sure there are ways to say other genders.

Great for combinatorics! Especially if you add in other ones (like BD).
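The counting here is a direct product: 5 statuses times 5 backgrounds times 2 sexes. A quick sketch using the category letters from the post:

```python
from itertools import product

status = "SMWDG"      # single, married, widowed, divorced, gay
background = "BCHJW"  # black, Christian, Hispanic, Jewish, white
sex = "MF"            # male, female

# every possible three-letter classified-ad acronym
acronyms = ["".join(t) for t in product(status, background, sex)]
print(len(acronyms))  # 5 * 5 * 2 = 50
```

So a classified-ads reader has 50 possible three-letter codes to decode, before adding extra letters.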

WTF-

Wisconsin  Tourism Federation

You know what else it means so I won't say it (this is a G-rated blog). When I first saw it I thought `what the fuck?- how could they have screwed up so badly?'


TEACHING TOOL- when teaching PHP (the Pigeonhole Principle, not the language PHP, which stands for PHP: Hypertext Preprocessor, a recursive acronym, and originally Personal Home Page) you can use the fact that

the number of concepts is GREATER THAN the number of 3-letter combos

implies that some 3-letter combos must be used more than once.
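In code, the count behind the teaching tool (the 20,000 figure is just an illustration I made up for "number of concepts"):

```python
# number of 3-letter combos over a 26-letter alphabet
combos = 26 ** 3
print(combos)  # 17576

# generalized pigeonhole principle: with more concepts than combos,
# some combo is shared by at least ceil(concepts / combos) concepts
concepts = 20_000
min_shared = -(-concepts // combos)  # ceiling division
print(min_shared)  # 2
```

With 17,576 possible three-letter acronyms and more concepts than that, collisions like PCP and SAT are guaranteed.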




A problem I thought was interesting- now...

Mon, 18 Sep 2017 04:36:00 +0000

On Nate Silver's page he sometimes (might be once a week) has a column of problems edited by Oliver Roeder. Pretty much math problems, though sometimes not quite. Some are math problems that I have seen before (e.g., hat problems). I don't bother submitting since that would just be goofy; I would be a ringer. Some are math problems that I have not seen before; I try to do them, I fail, but I read the answer and am enlightened. I call that a win. But some are math problems that I have not seen before; I try to do them, I fail, and when I see the solution it's a computer simulation or something else that isn't quite as interesting as I had hoped. I describe one of those now; however, I ask if it can be made more interesting.

The problem is from this column: here. I paraphrase: Let A be the numbers {1,2,3,...,100}. A sequence is nice if (1) it begins with any number in A, (2) every number is from A and is either a factor or a multiple of the number just before it, and (3) no number appears more than once. Find the LONGEST nice sequence.

Example of a nice sequence: 4, 12, 24, 6, 60, 30, 10, 100, 25, 5, 1, 97.

I worked on it:

1) By hand I came up with a nice sequence of length 42. This was FUN! You can either have fun trying to find a long nice sequence or you can look at mine here.

2) I tried to prove that it was optimal, hoping that either I would find it is optimal or be guided to a longer sequence. Neither happened. More important, this was NOT FUN.

3) I looked forward to the solution that would be in the next column and would be enlightening.

4) The next column, which did have the solution, is here! The answer was a sequence of length 77, found by a program that also verified there was no longer sequence. The sequence itself was mildly enlightening in that I found some tricks I didn't know about, but the lack of a real lower bound proof was disappointing.

They mentioned that this is a longest path problem (the graph is {1,...,100} with edges between numbers where one is a multiple of the other) and that such problems are NP-complete. That gave the impression that THIS problem is hard since it's a case of an NP-complete problem. That's not quite right: it's possible that this type of graph has a quick solution. But I would like YOU the readers to help me turn lemon into lemonade.

1) Is there a short proof that 77 is optimal? Is there a short proof that (say) there is no sequence of length 83? I picked 83 at random. One can easily prove there is no sequence of length 100.

2) Is the following problem in P or NPC or if-NPC-then-bad-things-happen: Given (n,k), is there a nice sequence over {1,...,n} of length at least k? (n is in binary, k is in unary, so that the problem is in NP.) I suspect not NPC.

3) Is the following problem in P or NPC or ...: Given a set of numbers A and a number k, is there a nice sequence of elements of A of length at least k (k in unary)? Might be NPC if one can code any graph into such a set. Might be in P since the input has a long length.

4) Is the following solvable: given a puzzle in the Riddler, determine ahead of time if it's going to be interesting? Clyde Kruskal and I have a way to solve this: every even-numbered column I read the problem and the solution and tell him if it's interesting, and he does the same for odd-numbered columns. [...]
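For small n the brute-force longest-path search is easy to write. Here is my sketch (plain exponential-time depth-first search, fine for toy sizes, hopeless at n=100 without whatever pruning the columnist's program used):

```python
def longest_nice(n):
    # Longest "nice" sequence over {1,...,n}: consecutive entries must be
    # factor/multiple pairs and nothing repeats. This is longest path in
    # the divisibility graph, found by exhaustive depth-first search.
    adj = {a: [b for b in range(1, n + 1)
               if b != a and (a % b == 0 or b % a == 0)]
           for a in range(1, n + 1)}
    best = 0

    def dfs(v, used, length):
        nonlocal best
        best = max(best, length)
        for w in adj[v]:
            if w not in used:
                used.add(w)
                dfs(w, used, length + 1)
                used.remove(w)

    for start in range(1, n + 1):
        dfs(start, {start}, 1)
    return best

print(longest_nice(10))
```

Already at n=10 you can see why 100 is out of reach for brute force: the search tree grows with every extra number that has many divisors.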



Random Storm Thoughts

Thu, 14 Sep 2017 16:36:00 +0000

It's Monday as I write this post from home. Atlanta, for the first time ever, is in a tropical storm warning. Georgia Tech is closed today and tomorrow. I'm just waiting for the power to go out. But whatever happens here won't even count as a minor inconvenience compared to what those in Houston, the Caribbean and Florida are facing. Our hearts go out to all those affected by these terrible storms.

Did global warming help make Harvey and Irma as dangerous as they became? Hard to believe we have an administration that won't even consider the question and keeps busy eliminating "climate change" from research papers. Here's a lengthy list cataloging Trump's war on science. 

Tesla temporarily upgraded its Florida owners' cars, giving them an extra 30 miles of battery life. Glad they did this, but it raises the question of why Tesla restricted the battery life in the first place. It reminds me of the 1970's, when if you wanted a faster IBM computer, you paid more and an IBM technician would come and turn the appropriate screw. Competition prevents software inhibitors on hardware. Who will be Tesla's competitors?

During all this turmoil the following question by Elchanan Mossel had me oddly obsessed: suppose you roll a six-sided die. What is the expected number of rolls needed until you get a six, given that all the rolls ended up being even numbers? My intuition was wrong, though when Tim Gowers falls into the same trap I don't feel so bad. I wrote a short Python program to convince me, and the program itself suggested a proof.
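My original program hasn't survived into this post, but a simulation along these lines shows the surprise: the naive intuition says 3 (a six is one of three even faces), while the conditional answer is 3/2.

```python
import random

def rolls_until_six(rng):
    # Roll until the first six; report how many rolls that took and
    # whether every roll in the sequence (including the six) was even.
    count, all_even = 0, True
    while True:
        r = rng.randint(1, 6)
        count += 1
        if r % 2 == 1:
            all_even = False
        if r == 6:
            return count, all_even

rng = random.Random(2017)
# keep only the runs where every roll was even, then average their lengths
kept = [c for c, ok in (rolls_until_six(rng) for _ in range(400_000)) if ok]
estimate = sum(kept) / len(kept)
print(estimate)  # close to 1.5, not the naive guess of 3
```

The point of the conditioning: a long run of rolls is much more likely to contain an odd number somewhere, so conditioning on "all even" heavily favors short runs.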

Updates on Thursday: I never did lose power though many other Georgia Tech faculty did. The New York Times also covered the Tesla update. 



The Scarecrow's math being wrong was intentional

Mon, 11 Sep 2017 01:49:00 +0000

In 2009 I had a post about movie mistakes (see here). One of them was the Scarecrow in The Wizard of Oz: after he got a diploma (AH, but not a brain) he said

The sum of the square roots of any two sides of an isosceles triangle is equal to the square root of the remaining side. Oh joy! Rapture! I have a brain!

I wrote that this mistake was either (1) a mistake, (2) on purpose, showing the Scarecrow really didn't gain any intelligence (or actually he was always smart, just not in math), or (3) since it was all Dorothy's dream, a sign that Dorothy was not good at math.

Some of the comments claimed it was (2).  One of the comments said it was on the audio commentary.

We now have further proof AND a longer story: in the book Hollywood Science: The Next Generation, From Spaceships to Microchips (see here) they discuss the issue (page 90). They point to our blog as having discussed it (the first book not written by Lance or Lipton-Regan to mention our blog?) and then give evidence that YES, it was intentional.

They got a hold of the original script. The Scarecrow originally had a longer even more incoherent speech that was so over the top that of course it was intentional. Here it is:

The sum of the square roots of any two sides of an isosceles triangle is equal to the square root of the remaining side: H-2-O Plus H-2-S-O-4 equals H-2-S-O-3 using pi-r-squared as a common denominator Oh rapture! What a brain!

While I am sure the point was that the Scarecrow was no smarter, I'm amused at the thought of Dorothy not knowing math or chemistry and jumbling them up in her dream.
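For fun, a quick check that the Scarecrow's "theorem" fails on the most symmetric example, an isosceles right triangle with legs 1 (the function name is mine; the correct statement, of course, is the Pythagorean theorem about squares of the legs of a right triangle):

```python
import math

def scarecrow_claim(a, b, c):
    # "The sum of the square roots of any two sides ... is equal to the
    # square root of the remaining side": try every choice of remaining side.
    sides = (a, b, c)
    return any(math.isclose(math.sqrt(sides[i]) + math.sqrt(sides[j]),
                            math.sqrt(sides[k]))
               for i, j, k in ((0, 1, 2), (0, 2, 1), (1, 2, 0)))

# isosceles right triangle: legs 1 and 1, hypotenuse sqrt(2)
print(scarecrow_claim(1.0, 1.0, math.sqrt(2)))  # False: the claim fails
# the Pythagorean theorem, by contrast, does hold for it:
print(math.isclose(1.0 ** 2 + 1.0 ** 2, math.sqrt(2) ** 2))  # True
```

(The side lengths 1, 1, 4 would satisfy the Scarecrow's equation, but they violate the triangle inequality, so no actual triangle obliges him.)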



Statistics on my dead cat policy- is there a correlation?

Thu, 07 Sep 2017 15:34:00 +0000

When I teach a small class (at most 40 students) I often have the dead-cat policy for late HW:

HW is due on Tuesday. But there may be things that come up that don't quite merit a doctor's note, for example your cat dying, but are legit grounds for an extension. Rather than have me judge every case, you ALL have an extension until Thursday, no questions asked. Realize of course that the HW is MORALLY due Tuesday. So if on Thursday you ask for an extension, I will deny it on the grounds that I already gave you one. You are advised not to abuse the policy. For example, if you forget to bring your HW in on Thursday, I will not only NOT give the extension, but I will laugh at you.

(I thought I had blogged on this policy before but couldn't find the post.)

Policy PRO: Much less hassling over late HW, doctor's notes and such.

Policy CON: The students tend to THINK of Thursday as the due date.

Policy PRO: Every student did every HW.

Caveat: The students themselves tell me that they DO start the HW on Monday night, but if they can't quite finish it they have a few more days. This is OKAY by me.

I have always thought that there is NO correlation between the students who tend to hand in the HW on Thursday and those that do well in the class.  In the spring I had my TA keep track of this and do statistics on it.

The class was Formal Language Theory (regular languages, P and NP, computability; I also put in some communication complexity, and I didn't do context-free grammars). There were 43 students in the class. We define a student's morality (M) as the number of HWs they hand in on Tuesday. There were 9 HWs.

3 students had M=0

12 students had M=1

9 students had M=2

5 students had M=3

4 students had M=4

4 students had M=5

1 student had M=6

1 student had M=7

2 students had M=8

2 students had M=9

We graphed grade vs morality (see  here)

The Pearson correlation coefficient is 0.51, so there is some linear correlation.

The p-value is 0.0003, which means that if there were truly no correlation, data this correlated would be very unlikely.
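For readers who want to redo the computation, the Pearson coefficient is just the covariance divided by the product of the standard deviations. A few lines of Python, run here on made-up (M, grade) pairs, NOT the actual class data:

```python
import math

def pearson_r(xs, ys):
    # sample Pearson correlation: covariance / (sd_x * sd_y)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# hypothetical (morality, grade) pairs for illustration only
m = [0, 1, 1, 2, 3, 5, 7, 8, 9]
grade = [60, 55, 70, 65, 75, 72, 90, 88, 95]
print(round(pearson_r(m, grade), 2))
```

A value near +1 means a strong increasing linear relationship, near 0 means none; 0.51 sits in between, which matches "some linear correlation."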

My opinion:

1) The 5 students with M at least 7 all did very well in the course. This seems significant.

2) Aside from that there is not much correlation.

3) If I tell next semester's class ``people who handed the HW in on Tuesday did well in the class, so you should do the same,'' that would not be quite right: do the good students hand things in on time, or does handing things in on time make you a good student? I suspect the former.

4) Am I surprised that so many students had such low M scores? Not even a little.




Rules and Exceptions

Mon, 04 Sep 2017 14:47:00 +0000

As a mathematician, nothing grates on me more than the expression "the exception that proves the rule". Either we bake the exception into the rule (all primes are odd except two) or the exception in fact disproves the rule.

According to Wikipedia, "the exception that proves the rule" has a legitimate meaning: a sign that says "No parking 3-6 PM" suggests that parking is allowed at other times. Usually, though, I see the expression used when someone tries to make a point and wants to dismiss evidence to the contrary. The argument says that if exceptions are rare, that gives even more evidence that the rule is true. As in yesterday's New York Times:
The illegal annexation of Crimea by Russia in 2014 might seem to prove us wrong. But the seizure of Crimea is the exception that proves the rule, precisely because of how rare conquests are today.
Another example might be the cold wave of 2014, which some say supports the hypothesis of global warming because such cold waves are so rare these days.

How about the death of Joshua Brown, whose Tesla on autopilot crashed into a truck? Does this give evidence that self-driving cars are unsafe, or that in fact they are quite safe because such deaths are quite rare? That's the main issue I have with "the exception that proves the rule": it allows two people to take the same fact and draw distinctly opposite conclusions.



NOT So Powerful

Thu, 31 Aug 2017 13:32:00 +0000

Note: Thanks to Sasho and Badih Ghazi for pointing out that I had misread the Tardos paper. Approximating the Shannon graph capacity is an open problem. Grötschel, Lovász and Schrijver approximate a related function, the Lovász theta function, which also has the properties we need to get an exponential separation of monotone and non-monotone circuits. Also, since I wrote this post, Norbert Blum has retracted his proof. Below is the original post.

A monotone circuit has only AND and OR gates, no NOT gates. Monotone circuits can only produce monotone functions like clique or perfect matching, where adding an edge only makes a clique or matching more likely. Razborov, in a famous 1985 paper, showed that the clique problem does not have polynomial-size monotone circuits. I chose Razborov's monotone bound for clique as one of my Favorite Ten Complexity Theorems (1985-1994 edition). In that section I wrote

Initially, many thought that perhaps we could extend these [monotone] techniques into the general case. Now it seems that Razborov's theorem says much more about the weakness of monotone models than about the hardness of NP problems.

Razborov showed that matching also does not have polynomial-size monotone circuits. However, we know that matching does have a polynomial-time algorithm and thus polynomial-size nonmonotone circuits. Tardos exhibited a monotone problem that has an exponential gap between its monotone and nonmonotone circuit complexity. I have to confess I never actually read Éva Tardos' short paper at the time, but since it serves as Exhibit A against Norbert Blum's recent P ≠ NP paper, I thought I would take a look.

The paper relies on the notion of the Shannon graph capacity. If you have a k-letter alphabet you can express k^n many words of length n. Suppose some pairs of letters were indistinguishable due to transmission issues. Consider an undirected graph G with edges between pairs of indistinguishable letters. The Shannon graph capacity is the value c such that you can produce c^n distinguishable words of length n for large n. The Shannon capacity of a 5-cycle turns out to be the square root of 5. Grötschel, Lovász and Schrijver use the ellipsoid method to approximate the Shannon capacity in polynomial time.

The Shannon capacity is anti-monotone: it can only decrease or stay the same if we add edges to G. If G has an independent set of size k you can get k^n distinguishable words just by using the letters of the independent set. If G is a union of k cliques, then the Shannon capacity is k, by choosing one representative from each clique, since all letters in a clique are indistinguishable from each other. So the largest independent set is at most the Shannon capacity, which is at most the smallest clique cover.

Let G' be the complement of a graph G, i.e., {u,v} is an edge of G' iff {u,v} is not an edge of G. Tardos' insight is to look at the function f(G) = the Shannon capacity of G'. Now f is monotone in G. f(G) is at least the largest independent set of G', which is the same as the largest clique in G. Likewise f(G) is bounded above by the smallest partition into independent sets, which is the same as the chromatic number of G, since all the nodes with the same color form an independent set. We can only approximate f(G), but by careful rounding we can get a monotone polynomial-time computable function (and thus polynomial-size AND-OR-NOT circuits) that sits between th[...]
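The square root of 5 lower bound for the 5-cycle comes from a size-5 independent set in the strong product C5 x C5: five pairwise distinguishable two-letter words give (sqrt(5))^n distinguishable words of length n. A sketch verifying the classic set {(i, 2i mod 5)}; the code and names are mine, not from any of the papers mentioned:

```python
from itertools import combinations

# edges of the 5-cycle, stored in both directions
C5 = {(i, (i + 1) % 5) for i in range(5)} | {((i + 1) % 5, i) for i in range(5)}

def confusable(u, v):
    # two letters can be confused if they are equal or adjacent in C5
    return u == v or (u, v) in C5

def distinguishable(words):
    # two words are distinguishable if SOME position holds a
    # non-confusable pair of letters; check every pair of words
    return all(any(not confusable(a, b) for a, b in zip(w1, w2))
               for w1, w2 in combinations(words, 2))

# the classic size-5 independent set in the strong product C5 x C5
words = [(i, (2 * i) % 5) for i in range(5)]
print(distinguishable(words))  # True
```

Lovász's famous theta-function argument shows this is tight: the capacity of C5 is exactly the square root of 5.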



either pi is algebraic or some journals let in an incorrect paper!/the 15 most famous transcendental numbers

Mon, 28 Aug 2017 02:52:00 +0000

Someone has published three papers claiming that π is 17 - 8*sqrt(3), which is really 3.1435935394... Someone else has published eight papers claiming π is (14 - sqrt(2))/4, which is really 3.1464466094... The first result is closer, though I don't think this is a contest that either author can win. Either π is algebraic, which contradicts a well-known theorem, or some journals accepted some papers with false proofs. I also wonder how someone could publish the same result 3 or 8 times. I could write more on this, but another blogger has done a great job, so I'll point to it: here.

DIFFERENT TOPIC (related?) What are the 15 most famous transcendental numbers? While it's a matter of opinion, there is an awesome website that claims to have the answer: here. I'll briefly comment on them. Note that some of them are conjectured to be trans but have not been proven to be. So it should be called 12 most famous trans numbers and 3 famous numbers conjectured to be trans. That is a bit long (and as short as it is only because I use `trans'), so the original author is right to use the title used.

1) π. YEAH. (This is probably the only number on the list such that a government tried to legally declare its value; see here for the full story.)

2) e. YEAH.

3) Euler's constant γ, which is the limit of (sum_{i=1}^n 1/i) - ln(n). I read a book on γ (see here) which had interesting history and math in it, but not that much about γ. I'm not convinced the number is that interesting. Also, not known to be trans (the website does point this out).

4) Catalan's constant 1 - 1/9 + 1/25 - 1/49 + 1/81 - ... Not known to be trans but thought to be. I had never heard of it until reading the website, so either (a) it's not that famous, or (b) I am undereducated.

5) Liouville's number 0.110001... (1 at the 1st, 2nd, 6th, 24th, 120th, etc. places, that is, all n! places; 0's elsewhere). This is a nice one since the proof that it's trans is elementary. First number ever proven trans, proved by the man whose name is on the number.

6) Chaitin's constant, which is the probability that a random TM will halt. See here for more rigor. Easy to show it's not computable, which implies trans. It IS famous.

7) Champernowne's number, which is 0.123456789 10 11 12 13... Cute!

8) Recall that ζ(2) = 1 + 1/4 + 1/9 + 1/16 + ... = π^2/6. The number ζ(3) = 1 + 1/8 + 1/27 + 1/64 + ..., known as Apéry's constant, is thought to be trans but not known to be. It comes up in physics and in the analysis of random minimal spanning trees (see here), which may be why this sum is here rather than some other sums.

9) ln(2). Not sure why this is any more famous than ln(3) or other such numbers.

10) 2^sqrt(2). In the year 1900 Hilbert proposed 23 problems for mathematicians to work on (see here for the problems, see here for a joint book review of two books about the problems, and see here for a 24th problem found in his notes much later). The 7th problem was to show that a^b is trans when a is algebraic (not 0 or 1) and b is an algebraic irrational. It was proven by Gelfond and refined by Schneider (see here). The number 2^sqrt(2) is sometimes called Hilbert's number. Not sure why it's not called the Gelfond-Schneider number. Too many syllables?

11) e^π. Didn't know this one. Now I do!

12) π^e (I had a post about compari[...]
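The two published "values" of π at the top of this post are easy to check numerically; neither is π, and the first is indeed the closer miss:

```python
import math

claim1 = 17 - 8 * math.sqrt(3)    # the value from the three papers
claim2 = (14 - math.sqrt(2)) / 4  # the value from the eight papers
err1 = abs(claim1 - math.pi)
err2 = abs(claim2 - math.pi)
print(claim1, claim2)  # 3.1435935394... and 3.1464466094...
print(err1 < err2)     # True: the first claim is closer (both are wrong)
```

Both are off in the third decimal place, which is a lot of error for a quantity known to trillions of digits.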



Kurtz-Fest

Thu, 24 Aug 2017 13:12:00 +0000

Stuart Kurtz turned 60 last October and his former students John Rogers and Stephen Fenner organized a celebration in his honor earlier this week at Fenner's institution, the University of South Carolina in Columbia.

Stuart has been part of the CS department at the University of Chicago since before they had a CS department and I knew Stuart well as a co-author, mentor, boss and friend during my 14+ years at Chicago. I would have attended this weekend no matter the location but a total eclipse a short drive from Atlanta (which merely had 97% coverage) certainly was a nice bonus.

Stuart Kurtz brought a logic background to computational complexity. He's played important roles in randomness, the structural properties of reductions, especially the Berman-Hartmanis isomorphism conjecture, relativization, counting complexity and logics of programs. I gave a talk about Stuart's work focusing on his ability to come up with the right definitions that help drive results. Stuart defined classes like Gap-P and SPP that have really changed the way people think about counting complexity. He changed the way I did oracle proofs, first trying to create the oracle first and then prove what happens as a consequence instead of the other way around. It was this approach, focusing on an oracle called sp-generic, that allowed us to give the first relativized world where the Berman-Hartmanis conjecture held.