Preview: Computational Complexity

Last Build Date: Fri, 19 Jan 2018 12:37:42 +0000

A Sensitive Game

Consider the following game on n bits among three players, Alice, Bob and Carol. The game works as follows: Carol picks a bit position and Alice sets that bit to 0 or 1. Carol then picks another unused bit position and Alice sets that bit as well. This repeats until the last bit which Carol gets to set herself. The bits are then revealed to Bob who has to give a set of bit positions that includes the bit Carol set at the end. Alice and Bob can work out a strategy ahead of time with the goal of minimizing the size of the set.

If Carol can force Bob to give a set of n^{ε} bits for some ε>0, this would prove the sensitivity conjecture! Basically a family of functions that give a counterexample to the sensitivity conjecture would give a strategy for Alice and Bob that yields a set of size n^{o(1)}. You can find the full proof in the papers above.

How well can Alice and Bob do? They can use the following strategy to achieve n^{1/2}: Break up the input positions into n^{1/2} groups of n^{1/2} bits each. Whenever Carol picks a bit position, Alice sets that bit to zero unless it is the last bit in that group, in which case she sets it to one. Carol can set the last bit in some group any way she wants. Bob will either see every group having one 1, and will give the set of the positions of all the 1's, or Bob will see a group of all zeros and give the positions in that group.
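The n^{1/2} strategy is easy to check by simulation. A minimal sketch (hypothetical code, with Carol naming positions in a random order rather than adversarially):

```python
import math
import random

def play_game(n, carol_picks, carol_last_bit):
    """One round of the game, with Alice and Bob playing the n^{1/2}-group
    strategy described above.  carol_picks is the order in which Carol names
    positions; she sets the final one herself to carol_last_bit."""
    g = int(math.isqrt(n))                  # n^{1/2} groups of n^{1/2} bits each
    bits = [None] * n

    for pos in carol_picks[:-1]:
        lo = (pos // g) * g                 # start of this position's group
        unset = sum(1 for j in range(lo, lo + g) if bits[j] is None)
        bits[pos] = 1 if unset == 1 else 0  # 0 unless last unset bit of group

    last = carol_picks[-1]
    bits[last] = carol_last_bit             # Carol sets the final bit herself

    # Bob's rule: an all-zero group must contain Carol's bit; otherwise every
    # group has exactly one 1 and Carol's bit is among the 1s.
    for k in range(n // g):
        grp = list(range(k * g, (k + 1) * g))
        if all(bits[j] == 0 for j in grp):
            return grp, last
    return [j for j in range(n) if bits[j] == 1], last

# However Carol plays, Bob's set has size at most sqrt(n) and contains her bit.
random.seed(0)
for _ in range(100):
    bob_set, last = play_game(16, random.sample(range(16), 16), random.randint(0, 1))
    assert last in bob_set and len(bob_set) <= 4
```

The two cases in Bob's rule are exactly the two ways the final group can end up: if Carol writes 0 her group is all zeros, and if she writes 1 every group contains exactly one 1.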

Mario Szegedy uses error-correcting codes to improve the upper bound to O(n^{0.4732}). A Georgia Tech undergraduate, DeVon Ingram, improves the upper bound to O(n^{0.4696}) by generalizing Szegedy's techniques. Ingram's proof comes down to finding a one-to-one function mapping subsets of {1,...,15} of size 4 to perfect codes of length 15 with the property that the bits of the code are 1 for the indices of the subset. Perhaps one could find a clever mapping that has this property, but Ingram wrote code using a matching algorithm. A true computer-assisted proof. Longer code lengths, alas, won't yield direct improvements.
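Ingram's actual construction is in his write-up, but the objects involved are easy to generate. As a sketch (the classical Hamming code is one perfect code of length 15): enumerate the code and, for any 4-subset, list the codewords whose bits are 1 on that subset, i.e. the candidates a matching algorithm would choose among.

```python
# The length-15 Hamming code: bitmasks whose set positions (numbered 1..15)
# XOR to zero.  It is a perfect single-error-correcting code.
codewords = []
for x in range(1 << 15):
    syndrome = 0
    for j in range(15):
        if (x >> j) & 1:
            syndrome ^= j + 1
    if syndrome == 0:
        codewords.append(x)

# Perfect with distance 3: radius-1 Hamming balls (size 16) tile {0,1}^15,
# so there are exactly 2^15 / 16 = 2048 codewords.
assert len(codewords) == 2 ** 11
assert min(bin(c).count("1") for c in codewords if c) == 3

def covering_codewords(subset):
    """Codewords whose bits are 1 at every index of a subset of {1,...,15}."""
    mask = sum(1 << (i - 1) for i in subset)
    return [c for c in codewords if c & mask == mask]

print(len(covering_codewords({1, 2, 3, 4})))
```

One could feed the bipartite graph of 4-subsets versus their covering codewords into any matching algorithm, which is the spirit of Ingram's computation.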

The connection to sensitivity doesn't go both ways. An upper bound of n^{o(1)} wouldn't necessarily yield a counterexample to the sensitivity conjecture, though it would tell us the limitations of this game approach. Nevertheless I find it cool that a game that doesn't talk about functions at all could help settle an important question about them.

Thu, 18 Jan 2018 17:47:00 +0000

Last month I posted about the sensitivity conjecture, and today I would like to talk about an interesting game developed by Gilmer, Koucký and Saks, and independently by Drucker, that could yield a proof.


Donald Knuth Turns 80 years and 6 days

LANCE: One of us should blog about Don Knuth turning 80.

BILL: How about both of us, sep posts?

Lance had his post here. Now it's my turn.

Donald Knuth was the first person (roughly- history is not as definite as mathematics) to use mathematics to analyse algorithms. This may seem like an obvious thing to do, but that's easy to say in retrospect. And he never lost focus: he may have had to use some hard math to analyze algorithms, but his goal always was to analyze algorithms. He didn't get fascinated with some obscure aspect of math like, say, surreal numbers, and go write a book on it. Oh wait, he did (see here). But, for the most part, everything he did was towards faster algorithms and faster typesetting.

In an early draft of a book review of *Companion to the Papers of Donald Knuth* I wrote at the end of it:

*Donald Knuth has been called the father of Theoretical Computer Science and hence his thoughts are worth knowing.*

I usually email my reviews to the author so they can make sure I didn't say anything stupid or in some very rare cases have a rebuttal. In Knuth's case I postal mailed it (ask your grandparents what that is) since he does not use email. He mailed back the hardcopy with some corrections in pencil. One of them was to cross out

*father of theoretical computer science*

and replace it with

*father of algorithmic analysis.*

I made the correction. That is how he views himself and it would be odd to argue the point.

As a tribute to him I have gathered up all of the book reviews in my column of books by him (between 10 and 12 depending on how you count) and books clearly inspired by him (1 book- A=B) into one file which I point to here.

Tue, 16 Jan 2018 16:41:00 +0000

Celebrating Donald Knuth's 80th birthday, or his 80-years-and-7-days birthday, seems odd. Should we use powers of 2? Hmm- too few, just 32 and 64 really. And having a 32-year celebration for someone is unusual. How about numbers that end in 0 in base 8? 64 would be 100, 72 would be 110, 80 would be 120, so AH- we would be celebrating! So let's celebrate!


Donald Knuth Turns Eighty

Donald Knuth is known for many things: important algorithms, the brilliant book series The Art of Computer Programming, TeX, the many awards he's won and the award named after him. In my favorite story, back in the 70's when Knuth started working on Volume 4 (still in progress), he wanted a good name for those hard problems that Cook and Karp found. His candidates "Herculean", "Formidable" and "Arduous" didn't get much traction, so he ran a contest, which had some interesting suggestions, before reluctantly going with the name "NP-complete" that came out of a group from MIT.

In his SIGACT News article that described the contest, Donald Knuth was so sure that these hard problems would remain hard that he put everything on the line, offering anyone who could prove P = NP a live turkey. May Don Knuth and his turkey continue to celebrate many more birthdays.

Wed, 10 Jan 2018 13:26:00 +0000

We've kept this blog going long enough that we start repeating our celebrations. Ten years ago Bill celebrated Don Knuth's 70th Birthday and today Donald Knuth turns 80. While he celebrates in Piteå, Sweden with its less than 4.5 hours of daylight, we wish him a happy birthday from stateside.
Looking back in this blog, in 2015 I wrote about the history of the history of computing including this wonderful talk by Knuth, Let's Not Dumb Down the History of Computer Science.

(Video: https://www.youtube.com/embed/gAXdDEQveKw)

A new largest prime found!

Mon, 08 Jan 2018 02:21:00 +0000

A new largest KNOWN prime has been discovered, and it's 23 million digits long. Nate Silver's website had an article about it (written by Oliver Roeder) here. An article about why people do this is here. Lance posted about finding large primes in 2006 here. I'll just make some random comments:

1) The prime is 2^{77,232,917} - 1.

2) The prime is not much bigger than the previous champion.

3) More generally, the graph (in Oliver Roeder's article) shows that from 1600 to about 1951 there was slow progress, but since then there has been LOTS of progress. See the table in this article. I had wanted to say that every year a new prime was found but, alas, it's not that simple a pattern. Even so, lots of new records.

4) I'll list the reasons given for why people do this and my comments on them.

a) Tradition! (Note to self: write a novelty song to the tune of Fiddler on the Roof's Tradition about why people work on various mathematical things.)

b) For the by-product of the quest. This one does make sense and I am sure has driven and helped test out some research. Reminds me of the spinoffs from going to the moon (see here). Contrary to what I heard as a kid, the orange-powder-drink Tang was not a spinoff. But there were many others. Of course, would these spinoffs have happened anyway? Hard to know.

c) People collect rare and beautiful items. I don't really see this one. Finding a large prime doesn't make it yours; it belongs to the ages! And I don't think people get their names on the prime either. The only prime that has someone's name on it is the famous Grothendieck prime, which is 57. Oh well. There are sets of primes with people's names on them: Mersenne primes, Gaussian primes (which are subsets of Gaussian integers so maybe shouldn't count), Eisenstein primes, and Sophie Germain primes. If you know of any other primes or sets of primes named after someone, leave a comment please.

d) For the glory! Maybe, but given how briefly people hold the record, fame is fleeting.

e) To test the hardware. This one I didn't know! I'll quote the article as to why primes are good for this: "Why are prime programs used this way? They are intensely CPU and bus bound. They are relatively short, give an easily checked answer (when run on a known prime they should output true after their billions of calculations). They can easily be run in the background while other 'more important' tasks run, and they are usually easy to stop and restart."

f) To learn more about their distribution. The prime number theorem was conjectured from data. We have so many primes now that I wonder if a few more really help formulate conjectures.

g) For the money. The first person to get a ten-million-digit prime gets $100,000. The first person to get a one-billion-digit prime gets $250,000. Wow! Except that the article must be a bit old, since the $100,000 prize was claimed in 2009 (see here). Still, there's that one-billion-digit prize out there!

5) Mersenne primes are of the form 2^n - 1. It is known that n must be prime for 2^n - 1 to be prime (this is not hard). There are much faster primality testing algorithms for Mersenne primes than for arbitrary primes. But see the next item.

6) While writing this blog post I looked up non-Mersenne primes. It seems like the largest one is 10223*2^{31172165} + 1 and was discovered in 2016. But of more interest: there is no Wikipedia page on non-Mersenne primes, just some outdated pages that don't have the most recent information. As the kids say, it's not a thing.

7) I'll add one more reason why people work on this, but it's more of a tautology: people work on finding large primes because they can! By contrast, finding VDW numbers is hard and likely to not make much progress.

8) I think that the most recent advances have come from computing power and not number theory. (If this is incorrect, let me know with a polite comment.)

9) Have there been spinoffs in either number [...]
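The faster test for Mersenne numbers is the Lucas-Lehmer test, which is what the record hunts run (on vastly larger exponents, with FFT-based multiplication). A minimal sketch:

```python
def lucas_lehmer(p):
    """For an odd prime p, M_p = 2^p - 1 is prime iff s_{p-2} == 0 (mod M_p),
    where s_0 = 4 and s_{k+1} = s_k^2 - 2.  Far faster than trial division."""
    if p == 2:
        return True                      # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Among prime exponents up to 31, exactly these give Mersenne primes;
# note 11 is prime but M_11 = 2047 = 23 * 89 is not.
primes = [p for p in range(2, 32) if all(p % d for d in range(2, p))]
assert [p for p in primes if lucas_lehmer(p)] == [2, 3, 5, 7, 13, 17, 19, 31]
```

The record prime 2^{77,232,917} - 1 was certified with exactly this recurrence, just run on serious hardware.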
Which of these Math acronyms are well known?

Fri, 05 Jan 2018 16:20:00 +0000

The last time I taught Grad Ramsey Theory there were very good math grads and ugrads in it. They used some acronyms - some I knew, some I didn't know (but know now). I am sure some are well known and some are not. I don't know which is which. Here is the list, with comments.
WLOG- Without Loss of Generality. This one I know and it seems well known: when Googled, the first page of results is all this definition. (Maybe I shouldn't use the term ``Googled''- I've heard that brand names don't like it when they become generic terms like `Googled'. Kleenex, Photoshop, Xerox had this fate. There is a word for it: genericide.)

ITOT- It turns out that. An Applied Math Prof used this in a course I had in 1977. I have not seen it used since then. I don't even use it.

BWOC- By Way of Contradiction. I thought this was better known. When I google it I got to this page which tells me it stands for:

Big Wolf on Campus (TV Show)

By Weight of Cement (Oil and Gas industry)

Big Women on Campus (GOOD- the counterpart to BMOC)

By Way of Contradiction (Given the other items on the list I'm surprised it made the cut)

Bob Wayne's Oil company

FTSOC- For the Sake of Contradiction. NOT, as Google told me, Fuck this Shit O'Clock (reminds me of when I use FML for Formula and the class tells me it means Fuck My Life).

WTS- This was a new one for me. It took a while to get it from context but it was Want to Show. Google gives Women in Transportation.

NTS- Need to show. Also a radio station and the National Traffic System.

I think the only one of these that is standard is WLOG. The rest I think could be useful. But I ask you- are any of these standard? Are there ones that are standard that I missed? Of course, the great thing about standards is that there are so many of them.

Complexity Year in Review 2017

Wed, 27 Dec 2017 18:40:00 +0000

Theorem of the year goes to the resolution of the dichotomy conjecture. I wrote about the conjecture in February and, while the Feder et al. paper didn't hold up, two later papers seem to resolve the conjecture:

A Dichotomy Theorem for Nonuniform CSPs by Andrei Bulatov

A Proof of CSP Dichotomy Conjecture by Dmitriy Zhuk

I checked with experts in the field, and at least one of these papers, and more likely both, ought to be correct.

Runners-up include two matching papers I posted about last month: Svensson and Tarnawski, who give a quasi-NC algorithm for general graph matching, and Anari and Vazirani, who give an NC algorithm for matching on planar graphs. We also had the nice quasi-polynomial time algorithm for parity games by Calude, Jain, Khoussainov, Li and Stephan that I posted on last March.

In last year's review we said "2016 will go down as a watershed year for machine learning," yet somehow it paled against 2017, with breakthroughs in chess, poker and astronomy, not to mention continuous advances in machine translation, autonomous vehicles and everything else. Maybe next year ML can write the 2018 year in review.

We had an awesome eclipse to remind us of the wonders of the world, and it almost made me forget about US politics. Computing keeps growing, and how do we find the resources to train people from pre-school through college and throughout their lives? How much should we worry about the dominance of a handful of computing companies?

Thanks to our guest posters Erin Chambers, Brittany Terese Fasy, Lori Ziegelmeier, Molly Fortnow and Periklis Papakonstantinou.

We remember Ken Arrow, Corrado Böhm, Michael Cohen, Petr Hájek, Monty Hall, Maurice Nivat, Raymond Smullyan, Peter Wegner and Lotfi Zadeh.

2018 is just full of questions: What will the Internet look like post-net neutrality? How will the new tax code play out? Where will Amazon put HQ2? What will machine learning do next? What can quantum computers with 50 qubits accomplish? Will bitcoin move to $300K or 30 cents?
And what great advances in complexity await us? [...]
Having Faith in Complexity

You don't need the full power of P = NP to break cryptography. I don't worry about quantum computers breaking RSA and related protocols. It won't sneak up on us--when (or if) quantum computing gets anywhere close to factoring large numbers, we'll figure out a strategy to change our protocols and to protect the information we already have. However, if someone comes up with an algorithm tomorrow that cracks AES, we'll have a crisis on our hands, as AES is so widely used that the algorithm is embedded into computer chips. Perhaps we can mitigate the damage before the algorithm spreads, or take our information off-line until we develop new solutions.

But what about blockchains, the technology that underlies cryptocurrencies such as Bitcoin? A blockchain consists of a series of transactions collected into a sequence of blocks, where each block contains a hash of the previous block, with the transactions themselves hashed and often encrypted with public-key cryptography. One would hope that breaking the cryptography would be caught quickly and we'd still have a legit record of transactions saved somewhere. The transactions themselves might be compromised, especially if anonymity was built into the system.
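The tamper-evidence comes entirely from that chain of hashes. A toy sketch (a hypothetical block format, using SHA-256 from the standard library; real blockchains add signatures, Merkle trees and proof-of-work on top):

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """A block commits to its transactions and to the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return {"prev": prev_hash, "txs": transactions,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash; editing an earlier block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"prev": block["prev"], "txs": block["txs"]},
                          sort_keys=True)
        if (block["prev"] != prev
                or hashlib.sha256(body.encode()).hexdigest() != block["hash"]):
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for txs in (["alice->bob:5"], ["bob->carol:2"], ["carol->alice:1"]):
    block = make_block(txs, prev)
    chain.append(block)
    prev = block["hash"]

assert verify_chain(chain)
chain[0]["txs"][0] = "alice->bob:500"   # tamper with history
assert not verify_chain(chain)          # every later link is now broken
```

The point of the post is precisely that this guarantee is only as strong as the hash function: a break of SHA-256 would let an attacker rewrite history undetected.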

Bitcoin itself, as I write this, has a total market cap of over $250 billion based fully on cryptography. The cryptography will probably hold up; Bitcoin investors have more to worry about from bad implementations or the pop of a bubble of unrealistic expectations. But as we watch so many exciting advances in computing tackling challenges that we would never have expected to get solved, should we continue to build our economy on the hope that other advances won't happen?

Thu, 21 Dec 2017 15:14:00 +0000

I believe P ≠ NP as much as anyone else. Nevertheless, should we worry about the trust we put in complexity?


Monkey First!

*A company gets a contract to do the following: train a monkey to sit on a 10-foot pedestal and recite some passages of Shakespeare. After a week they announce they have made progress! They invite their investors to see what progress they have made! They unveil a curtain and there is... a 10-foot pedestal.*

This story was in an article about how Google does moonshots-- that is, high-risk, high-reward, innovative work. The article is here. (How the Atlantic makes money when they have stuff online is a mystery to me. Perhaps they do it in a very innovative way.) The point is that it's BAD to have tangible results (like the pedestal) that are not getting at the heart of the problem. So Google has various incentives to do the important stuff. Their slogan is MONKEY FIRST.

This also applies to our research. The following sequence of events is common:

1) Prove some scattered results.

2) Pedestal or Monkey? You could write up what you have, polish it, write up some nice LaTeX macros to make the writing of the paper easier, OR you could try to find the unifying principle, which would be hard and might not work, but if it works would be, as the kids say, Jawesome (Jaw-dropping awesome). The sad answer is that which you do might depend on when the next conference deadline is.

More generally, there is a tension between safe, doable research (Pedestal) and high-risk, high-reward research (Monkey). Is our incentive structure set up to encourage high-risk, high-reward? The tenure system is supposed to do it, and it DOES in some cases, but not as much as it could, since there are other factors (salary, promotion to full prof, grants).

Does the system encourage high-risk high-reward? Should it? Could we do better? What are your experiences? I have no answers (especially to the question of what are your experiences) so I welcome your comments.

Mon, 18 Dec 2017 03:18:00 +0000

The following story is not true, nor has anyone claimed it's true, but it has a point.


Our AI future: The Good and the Ugly

The Neural Information Processing Systems conference held last week in Long Beach, California, sold out its 7500 registration slots in 12 days. NIPS, not long ago just another academic conference, has become a major machine learning job market, where newly minted Ph.D.s earn north of $300,000 and top-ranked senior academics command multimillion-dollar, multiyear contracts.

AlphaZero, an offshoot of Google’s Go programs, learned chess given only the rules in just four hours (on 5000 tensor processing units) and easily beats the best human-designed chess programs. Check out this match against Stockfish.

(Video: https://www.youtube.com/embed/UcAfg9v_dDM)

It's a continuing trend: machine learning often works better when humans just get out of the way.

The advances in machine learning and automation have a dark side. Earlier this week I attended the CRA Summit on Technology and Jobs, one of a series of meetings organized by Moshe Vardi on how AI and other computing technology will affect the future job market. When we talk about ethics in computer science we usually talk about freedom of information, privacy and fairness but this may be the biggest challenge of them all.

The most stark statistic: Contrary to what certain politicians may tell you, manufacturing output in the United States has never been higher, but manufacturing jobs have declined dramatically due to automation.

The changes have hit hardest for white, middle-class, less-educated males. While this group usually doesn’t get much attention from academics, they have been hit hard, often taking less rewarding jobs or dropping out of the job market entirely. We're seeing many young people living with their parents, spending their days playing video games, and we're seeing a spike in suicides and drug use. Drug overdose is now the leading cause of death of men under 50.

There are no easy solutions. Universal basic income won’t solve the psychological need a job plays in being a part of something bigger than oneself. In the end we'll need to rethink the educate-work-retire cycle towards more life-long learning and find rewarding jobs that go around automation. This all starts by having a government that recognizes these real challenges.

Wed, 13 Dec 2017 18:46:00 +0000

I don’t directly work in machine learning, but one cannot deny the progress it has made and the effect it has on society. Who would have thought even a few years ago that ML would have basically solved face and voice recognition, and would translate nearly as well as humans?


Interesting Probability on a VERY OLD TV show

Do TV shows overestimate how much a genius can help solve crimes or make really good crystal meth (which seems to be blue)? YES, see here.

Do TV shows get math wrong? YES, see here and in about 90% of the episodes of Numb3rs.

Closer to home: do TV shows say stupid things about P vs NP? Elementary (one of the two modern-day Sherlock Holmes shows) did; see here.

Did Kirk and Spock really defeat a computer by a trick that wouldn't work now? Yes, see Lance's post on this here.

Do TV shows use the word Quantum incorrectly? They do, but they are not alone in that; see here.

Do the people writing Futurama get their math right? Yes; see here.

Do the people writing 24 get their math wrong? Yes; see here.

Does The Big Bang Theory mostly get things right? Yes! See here.

There are more (Seinfeld thinks comedians should learn proofs! Really; see here), but I can make my point just with the ones above.

ALL of the TV shows except Star Trek were from after 2000 (or so). So, with the exception of Science Fiction, math-refs and sci-refs in TV shows are relatively recent- I had thought.

Which is why I was surprised and delighted to see, in an episode of the old western (anti-western? satire of a western?) Maverick, from 1958 (before I was born!), called Rope of Cards, a CORRECT and INTERESTING math reference. Maverick bets that a random 25 cards from a deck of cards can be arranged into five 5-card pat hands (I had to look that up: hands where you don't want to discard any cards, so a flush, a straight, or a full house would qualify; 4 of a kind would be pat if there were no wild cards). The sucker takes the bet and loses. Maverick later says the odds are high and calls the game Maverick Solitaire. *And that is now the name of the puzzle*- see here. The probability is around 0.98.
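So, with no wild cards, "pat" here means a straight, flush, full house, or four of a kind. A quick sketch of a checker for a single hand (one could wrap it in a search over partitions of the 25 cards to estimate that 0.98 probability, which I haven't done here):

```python
from collections import Counter

RANKS = "23456789TJQKA"

def is_pat(hand):
    """Is a 5-card hand 'pat': straight, flush, full house, or four of a
    kind?  Cards are rank+suit strings, e.g. 'TH' = ten of hearts."""
    ranks = sorted(RANKS.index(card[0]) for card in hand)
    suits = {card[1] for card in hand}
    counts = sorted(Counter(ranks).values())
    if len(suits) == 1:                        # flush (incl. straight flush)
        return True
    if counts in ([1, 4], [2, 3]):             # four of a kind / full house
        return True
    distinct = len(set(ranks)) == 5
    return distinct and (ranks[4] - ranks[0] == 4        # ordinary straight
                         or ranks == [0, 1, 2, 3, 12])   # wheel: A-2-3-4-5

assert is_pat(["2H", "3H", "9H", "JH", "KH"])      # flush
assert is_pat(["4C", "4D", "4H", "4S", "9C"])      # four of a kind
assert is_pat(["6C", "7D", "8H", "9S", "TC"])      # straight
assert is_pat(["AC", "2D", "3H", "4S", "5C"])      # wheel straight
assert not is_pat(["2C", "2D", "7H", "9S", "KC"])  # just a pair: not pat
```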

I call this a mention of math since it has to do with probability- which may be a stretch. And I doubt the scene would encourage people to go into math. But it might encourage one to learn probability either to sucker others or to not be suckered.

So the question now is- are there other non-science-fiction, refs to math in older TV shows?

I suspect yes - similar to the one above which is gambling and probability. What is the earliest mention of math on a TV show? The oldest that did not involve science fiction or gambling?

Tue, 12 Dec 2017 14:33:00 +0000

I have posted about things I see on TV or in movies that are math or CS related.

Do TV shows get math wrong. YES, see here and about 90% of the episodes of Numb3rs

Closer to home- do TV shows say stupid things about P vs NP. Elementary (one of the two Modern Day Sherlock Holmes shows did) does see here

Did Kirk and Spock really defeat a computer by a trick that wouldn't work now. Yes, see Lance's post on this here

Do TV shows use the word Quantum incorrectly? They do but they are not alone as such, see here

Do people writing Futrama get their math right! Yes- see here

Do people writing 24 get their math wrong! Yes- see here

Does the Big Bang Theory mostly get things right? Yes! - see here

There are more (Seinfeld things comedians should learn proofs! Really- see here) but I can make my point just with the ones above.

ALL of the TV shows except Star Trek were from after 2000 (or so). So, with the exception of science fiction, math refs and sci refs in TV shows are relatively recent. Or so I had thought.

Which is why I was surprised and delighted to see an episode of the old western (anti-western? satire of a western?) Maverick, from 1958 (before I was born!), called Rope of Cards, make a CORRECT and INTERESTING math reference. Maverick bets that a random 25 cards from a deck of cards can be arranged into five 5-card pat hands (I had to look that up: hands where you don't want to discard any cards, so a flush, a straight, or a full house would qualify; 4 of a kind would be pat if there were no wild cards). The sucker takes the bet and loses. Maverick later says the odds are high and calls the game Maverick Solitaire. *And that is now the name of the puzzle*; see here. The probability is around 0.98.

I call this a mention of math since it has to do with probability, which may be a stretch. And I doubt the scene would encourage people to go into math. But it might encourage one to learn probability, either to sucker others or to avoid being suckered.

So the question now is: are there other non-science-fiction refs to math in older TV shows?

I suspect yes, and, like the one above, probably involving gambling and probability. What is the earliest mention of math on a TV show? The oldest that did not involve science fiction or gambling?

Razor's Edge

Thu, 07 Dec 2017 15:21:00 +0000

Informally the sensitivity conjecture asks whether every hard Boolean function has a razor's edge input, where flipping a random bit has a reasonable chance of flipping the output.
Let's be more precise. We consider functions f mapping {0,1}^{n} to {0,1}. For every input x, the decision tree complexity at x is the least number of bits of x you would need to query to decide whether the function outputs 0 or 1. The decision tree complexity of a function is the maximum decision tree complexity over all possible x. Most interesting functions have high decision tree complexity; even the lowly OR function requires querying every bit on the all-zeroes input. Decision tree complexity is polynomially equivalent to randomized complexity, quantum complexity, certificate complexity, and the degree of a polynomial that computes the function exactly or approximately. The recent paper by Aaronson, Ben-David and Kothari gives a nice chart showing the relationships between these measures, with references to the various papers giving the bounds.

The sensitivity of f on an input x is the number of bit locations i such that f(x)≠f(x⊕i) where x⊕i is x with the ith bit flipped. The sensitivity of f is the maximum sensitivity over all inputs. The sensitivity conjecture states that there is some ε>0 such that the sensitivity of f is at least m^{ε} if the decision tree complexity is at least m. If the conjecture were true then for any function with maximal decision tree complexity n (querying every input bit) there must be some razor's edge input x such that flipping a random bit of x has probability at least n^{-ε} of flipping the output.

I find it surprising that we have no proof or counterexample to this purely combinatorial question. There is a generalization of sensitivity known as block sensitivity which is the largest set of disjoint blocks where flipping the bits in any block flips the output bit. Block sensitivity is known to be polynomially related to decision tree complexity.
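To make these definitions concrete, here is a brute-force sketch (my own illustration, not from the post) that computes the sensitivity of a Boolean function by checking all 2^n inputs, so it is only feasible for toy values of n:

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of positions i with f(x) != f(x with bit i flipped)."""
    best = 0
    for bits in product([0, 1], repeat=n):
        x = list(bits)
        # Count the bit positions whose flip changes the output at x.
        s = sum(1 for i in range(n)
                if f(x) != f(x[:i] + [x[i] ^ 1] + x[i + 1:]))
        best = max(best, s)
    return best

# The lowly OR function: on the all-zeroes input every bit is sensitive.
OR = lambda x: int(any(x))
print(sensitivity(OR, 3))  # 3
```

Parity shows the easy direction of the conjecture: `sensitivity(lambda x: sum(x) % 2, n)` returns n, matching its decision tree complexity. Block sensitivity would additionally maximize over disjoint blocks of positions rather than single bits.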

In a future post I'll talk about some approaches towards resolving this conjecture.


Fireside chat with Simons Inst Director Dick Karp

Tue, 05 Dec 2017 04:00:00 +0000

The above link is Samir Khuller interviewing Dick Karp, though it's labelled as a fireside chat with Dick Karp.

Very interesting to hear how TCS has evolved. More generally, it's good to know where you've come from to have a better idea of where you're going.

bill g.

Kolmogorov Complexity and the Primes

Bill's post on how to derive the non-finiteness of the primes from Van der Waerden's theorem reminds me of a nice proof using Kolmogorov complexity. A quick primer: Fix some universal programming language. Let C(x), the Kolmogorov complexity of x, be the length of the smallest program that outputs x. One can show by a simple counting argument that for every n there is an x of length n such that C(x) ≥ n. We call such x "random".

Suppose we had a finite list of primes p_{1},…,p_{k}. Then any number m can be expressed as p_{1}^{e_{1}}···p_{k}^{e_{k}}. Pick n large, a random x of length n, and let m be the number x expresses in binary. We can compute m from e_{1},…,e_{k} and a constant amount of other information, remembering that k is a constant. Each e_{i} is at most log m, so we can describe all of them in O(log log m) bits and thus C(m) = O(log log m). But roughly C(m) = C(x) ≥ n = log m, a contradiction.

But we can do better. Again pick n large, a random x of length n, and let m be the number x expresses in binary. Let p_{i} be the largest prime that divides m, where p_{i} is the ith prime. We can describe m by p_{i} and m/p_{i}, or by i and m/p_{i}. So we have C(m) ≤ C(i, m/p_{i}) ≤ C(i) + C(m/p_{i}) + 2 log C(i) ≤ log i + log(m/p_{i}) + 2 log log i + c. The 2 log C(i) term is needed to specify the separation between the program for i and the program for m/p_{i}.

Since C(m) ≥ log m, we have

log m ≤ log i + log (m/p_{i})+ 2 log log i + c

log m ≤ log i + log m - log p_{i} + 2 log log i + c

log p_{i} ≤ log i + 2 log log i + c

p_{i} ≤ O(i (log i)^{2})

The prime number theorem has p_{i} approximately i log i, so we are just a log factor off from optimal with simple Kolmogorov complexity.
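As a quick empirical sanity check (my own illustration, not from the post), a simple sieve shows the ith prime landing between i log i and the i (log i)^{2} bound above:

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes; returns all primes up to limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(200000)
for i in (100, 1000, 10000):
    p_i = primes[i - 1]  # the ith prime (1-indexed, as in the post)
    print(i, p_i, round(i * math.log(i)), round(i * math.log(i) ** 2))
```

For i = 10000, p_i = 104729 while i log i ≈ 92103 and i (log i)^2 ≈ 848304, so the Kolmogorov bound holds with plenty of room.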

I wrote a short introduction to Kolmogorov complexity with this proof. I originally got the proof from the great text on Kolmogorov complexity from Li and Vitányi and they give credit to Piotr Berman and John Tromp.

Thu, 30 Nov 2017 12:35:00 +0000


Van der Waerden's theorem implies the infinitude of the primes

(Sam Buss and Denis Hirschfeld helped me on this post.)

I was reading the table of contents of the American Math Monthly and saw an article by Levent Alpoge entitled

Van der Waerden and the primes

in which he showed from VDW's theorem that the set of primes is infinite. The article is here and here. My writeup of it is here. Prof K saw me reading the paper.

K: I see you are interested in proving the set of primes is infinite from VDW's theorem.

BILL: Yes, who wouldn't be!!!!

K: Well, lots of people. Including me. Can't you just state VDW's theorem and then give the normal proof? Would that count? Besides, we already have an easy proof that the set of primes is infinite without using VDW's theorem.

I turn K's comments into a valid question: What does it mean to *prove A from B* if A is already known?

There are two issues here, informal and formal.

Informally: If you look at the proof of *VDW --> primes infinite*, the steps in that proof look easier than the usual proof that the set of primes is infinite. And the proof is certainly different. If you read the paper you will see that I am certainly not smuggling in the usual proof. Also, the proof truly does use VDW's theorem.

Formally one could (and people working in Reverse Mathematics do similar things; see the books Subsystems of Second Order Arithmetic by Simpson, and Slicing the Truth, reviewed here) devise a weak axiom system that itself cannot prove *the set of primes is infinite*, but can prove the implication *VDW --> primes infinite*. Note that Reverse Mathematics does this sort of thing, but for proofs involving infinite objects, nothing like what I am proposing here.

Open Problem 1: Find a proof system where the implication VDW --> primes infinite can be proven, but primes infinite cannot. Sam Buss pointed out to me that for the weak system IΔ_{0} it is not known whether it can prove the primes are infinite.

Open Problem 2: Find a proof system where you can do both proofs, but the proof of the implication is much shorter. Perhaps look at (VDW --> there are at least n primes) and (there are at least n primes) and compare the lengths of the proofs as a function of n.

Open Problem 3: The statement *there are no primes with n bits, with leading bit 1* can be expressed as a propositional statement. Get lower bounds on its refutation in (say) resolution. (A commenter pointed out an error in a prior version of this one, so be wary; there may be an error here as well.)

I am suggesting work on the reverse mathematics of systems much weaker than RCA_{0}. I do not know if this is a paper, a PhD thesis, a career, a dead end, or already pretty much done but I am not aware of it.

Mon, 27 Nov 2017 18:18:00 +0000


The Grad Student Tax

Mon, 20 Nov 2017 13:15:00 +0000

By now you've read from Luca or Scott or PhD Comics or a variety of other sources about the dangerous changes to the tax code that passed the US House of Representatives last week. Among a number of university-unfriendly policies, the tax code would eliminate the tax exemption for graduate student tuition for students supported with teaching or research duties, which covers nearly every PhD student in STEM fields. The CRA, ACM, IEEE, AAAI, SIAM and Usenix put out a joint statement opposing this tax increase on graduate students. This is real.
Without other changes, a tax on tuition will make grad school unaffordable to most doctoral students. In computer science, where potential PhD students can typically get lucrative jobs in industry, we'll certainly see a precipitous drop in those who choose to continue their studies. Universities will have to adjust by lowering tuition, if finances and state law allow, and by raising stipends. US government science funding will at best remain flat, so in almost any scenario we'll see far fewer students pursue PhD degrees, particularly in CS and STEM fields. Keep in mind we already don't come close to producing enough CS PhD students entering academia to meet the dramatically growing demand, and these moves could frustrate faculty who also might head off to industry.

The current Senate proposal leaves the exemption in place, though no one can predict what will happen when the two bills get reconciled. In the best-case scenario this bill goes the same way as the failed health care reform, but Republicans seem desperate to pass something major this fall. So reach out to your representatives, especially your senators, and express the need to leave in the exemption.

A Tale of Three Rankings

In the Spring of 2018, US News and World Report should release their latest rankings of US graduate science programs, including computer science. These are the most cited of the deluge of computer science rankings we see out there. The US News rankings have a long history, and since they are reputation based they roughly correspond to how we see CS departments, though some argue that reputation lags the actual quality of a department. US News and World Report also has a new global ranking of CS departments. The US doesn't fare that well on the list, and the rankings of the US programs on the global list are wildly inconsistent with the US list. What's going on?

75% of the global ranking is based on statistics from Web of Science. Web of Science captures mainly journal articles, whereas conferences in computer science typically have a higher reputation and more selectivity. In many European and Asian universities, hiring and promotion often depend heavily on publications and citations in Web of Science, encouraging their professors to publish in journals and thus leading to higher-ranked international departments.

The CRA rightly put out a statement urging the CS community to ignore the global rankings, though I wish they had made a distinction between the two different US News rankings.

I've never been a fan of using metrics to rank CS departments but there is a relatively new site, Emery Berger's Computer Science Rankings, based on the number of publications in major venues. CS Rankings passes the smell test for both their US and global lists and is relatively consistent with the US News reputation-based CS graduate rankings.

Nevertheless, I hope CS Rankings will not become the main ranking system for CS departments. Departments wishing to raise their ranking would hire faculty based mainly on their ability to publish large numbers of papers in major conferences. Professors and students would then focus on quantity of papers, and this would in the long run discourage risk-taking long-range research, as well as innovations in improving diversity or in educating graduate students.

As Goodhart's Law states, "when a measure becomes a target, it ceases to be a good measure". Paradoxically CS Rankings can lead to good rankings of CS departments as long as we don't treat it as such.

Thu, 16 Nov 2017 13:42:00 +0000


Can you measure which pangrams are natural

A pangram is a sentence that contains every letter of the alphabet. The classic is:

The quick brown fox jumps over the lazy dog.

(NOTE: I had `jumped' but a reader pointed out that then there is no s, and that `jumps' is the correct word.)

which is only 35 letters.

I could give a pointer to lists of such, but you can do that yourself.

My concern is:

a) are there any pangrams that have actually been uttered NOT in the context of `here is a pangram'?

b) are there any that really could be?

That is- which pangrams are natural? I know this is an ill defined question.

Here are some candidates for natural pangrams

1) Pack my box with five dozen liquor jugs

2) Amazingly few discotheques provide jukeboxes

3) Watch Jeopardy! Alex Trebek's fun TV quiz game

4) Cwm fjord bank glyphs vext quiz

(Okay, maybe that one is not natural as it uses archaic words. It means ``Carved symbols in a mountain hollow on the bank of an inlet irritated an eccentric person.'' Could come up in real life. NOT. It uses every letter exactly once.)
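Testing candidates mechanically is easy; here is a small sketch (my own illustration; `is_perfect_pangram` is a hypothetical name for the every-letter-exactly-once property of the cwm fjord sentence):

```python
import string

def is_pangram(s):
    """True if s uses every letter of the English alphabet at least once."""
    return set(string.ascii_lowercase) <= set(s.lower())

def is_perfect_pangram(s):
    """True if s uses every letter exactly once."""
    letters = [c for c in s.lower() if c in string.ascii_lowercase]
    return sorted(letters) == list(string.ascii_lowercase)

print(is_pangram("The quick brown fox jumps over the lazy dog"))  # True
print(is_perfect_pangram("Cwm fjord bank glyphs vext quiz"))      # True
```

Measuring naturalness is the hard part; one crude proxy would be how often the sentence's words co-occur in a large text corpus, which the letter-checking above says nothing about.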

How can you measure how natural they are?

For the Jeopardy one, I've shown it to people and asked them ``What is unusual about this new slogan for the show Jeopardy?'' and nobody gets it. More important, they believe it is the new slogan.

So I leave to the reader:

I) Are there other NATURAL pangrams?

II) How would you test naturalness of such?

Pinning down `natural' is hard. I did a guest post in 2004, before I was an official co-blogger, about when a problem (a set, for us) is natural, for example the set of all regular expressions with squaring (see here).

Mon, 13 Nov 2017 14:48:00 +0000


Advice for the Advisor


Thu, 09 Nov 2017 19:08:00 +0000

A soon-to-be professor asked me recently if I could share some ideas on how to advise students. I started to write some notes, only to realize that I had already posted on the topic in 2006.

Computer science evolves dramatically but the basic principles of advising don't. This advice pretty much works now as well as it did in 2006, in the 80's when I was a student, or even in the 18th century. Good advising never goes out of style.

Have students work on problems that interest them, not just you. I like to hand them a proceedings of a recent conference and have them skim abstracts to find papers they enjoy. However, if they stray too far from your research interests, you will have a hard time pushing them in the right directions. And don't work on their problems unless they want you to.

Keep your students motivated. Meet with them on a regular basis. Encourage students to discuss their problems and other research questions with other students and faculty. Do your best to keep their spirits high if they have trouble proving theorems or are not getting their papers into conferences. Once they lose interest in theory they won't succeed.

Feel free to have them read papers, do some refereeing and reviewing, give talks on recent great papers. These are good skills for them to learn. But don't abuse them too much.

Make sure they learn that selling their research is as important as proving the theorems. Have them write the papers and make them rewrite until the paper properly motivates the work. Make them give practice talks before conferences and do not hold back on the criticism.

Some students will want to talk about some personal issues they have. Listen as a friend and give some suggestions without being condescending. But if they have a serious emotional crisis, you are not trained for that; point them to your university counseling services.

Once it becomes clear a student won't succeed working with you, or won't succeed as a theorist, or won't succeed in graduate work, cut them loose. The hardest thing to do as an advisor is to tell a student, particularly one that tries hard, that they should go do something else. It's much easier to just keep them on until they get frustrated and quit, but you do no one any favors that way.

Of course I don't and can't hand out a physical proceedings to a student to skim. Instead I point to on-line proceedings but browsing just doesn't have the same feel.

Looking back I would add some additional advice. Push your students and encourage them to take risks with their research. If they aren't failing to solve their problems, they need to try harder problems. We too often define success by having your paper accepted into a conference. Better to have an impact on what others do.

Finally remember that advising doesn't stop at the defense. It is very much a parent-child relationship that continues long after graduation. Your legacy as a researcher will eventually come to an end. Your legacy as an advisor will live on through those you advise and their students and so on to eternity.

The two fears about technology- one correct, one incorrect


Mon, 06 Nov 2017 16:44:00 +0000

When the luddites smashed loom machines their supporters (including Lord Byron, Ada Lovelace's father) made two arguments in favor of the luddites (I am sure I am simplifying what they said):

- These machines are tossing people out of work NOW and this is BAD for THOSE people. In this assertion they were clearly correct. (`Let's just retrain them' only goes so far.)
- This is bad for mankind! Machines displacing people will lead to the collapse of civilization! Mankind will be far worse off because of technology. In this assertion I think they were incorrect. That is, I think civilization is better off now because of technology. (If you disagree, leave an intelligent, polite comment. Realize that just by leaving a comment you are using technology. That is NOT a counterargument. I don't think it's even IRONY. Not sure what it is.)
- (This third one is mine and it's more of a question.) If you take the human element out of things then bad things will happen. There was a TV show where a drone was going to be dropped on something, but a HUMAN noticed there were red flowers on the car and deduced it was a wedding, so it wasn't dropped. Yeah! But I can equally well see the opposite: a computer program notices things a person would have missed that indicate it's not the target. But of course that would not make as interesting a story. More to the point: if we allow computers to make decisions without the human element, is that good or bad? For grad admissions, does it get rid of bias or does it reinforce bias? (See the book Weapons of Math Destruction for an intelligent argument against using programs for, say, grad admissions and other far more important things.)

When Deep Blue beat Kasparov in chess, there were some articles about how this could be the end of mankind. That's just stupid. For a more modern article on some of the dangers of AI (some reasonable, some not) see this article on Watson.

It seems to me that AI can do some WELL DEFINED things (e.g., chess) very well, and even some not-quite-so-well-defined things (natural language translation) very well, but the notion that AIs will evolve to be `really intelligent' (not sure that is well defined), think they are better than us, and destroy us seems like bad science fiction (or good science fiction).

Watson can answer questions very, very well, and medical diagnosis machines may well be much better than doctors. While this may be bad news for Ken Jennings and for doctors, I don't see it being bad for humanity in the long term. Will we one day look back at the fears of AI and see that they were silly, that the machines did not, terminator-style, turn against us? I think so. And of course I hope so.

Matching and Complexity

Given a group of people, can you pair them up so that each pair are Facebook friends with each other? This is the famous perfect matching problem. The complexity of matching has a rich history, which got a little richer in the past few months.

For bipartite graphs (consider only friendships between men and women), we have had fast matching algorithms since the 1950's via augmenting paths. In the 1965 classic paper, Paths, Trees and Flowers, Jack Edmonds gives a polynomial-time algorithm for matching on general graphs. This paper also laid out an argument for polynomial time as efficient computation that would lead to the complexity class P (of P v NP fame).
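As an illustration of the augmenting-path idea for the bipartite case, here is a textbook-style sketch (my own, not from the post; general graphs need Edmonds' blossom handling, which is much subtler):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-vertices adjacent to left-vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        # Look for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# Three men, three women; a perfect matching exists.
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3, 3))  # 3
```

Each call to try_augment either leaves the matching unchanged or grows it by one edge, which is why repeating it from every left vertex yields a maximum matching.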

After Razborov showed that the clique problem doesn't have polynomial-size monotone circuits, his proof techniques also showed that matching doesn't have polynomial-size monotone circuits, and Raz and Wigderson showed that monotone circuits for matching require exponential size and linear depth. Because of Edmonds' algorithm, matching does have polynomial-size circuits in general. NOTs are very powerful.

Can one solve matching in parallel, say in the class NC (Nick's Class, after Pippenger) of problems computable by a polynomial number of processors in polylogarithmic time? Karp, Upfal and Wigderson give a randomized NC algorithm for matching. Mulmuley, Vazirani and Vazirani prove an isolation lemma that allows a randomized reduction of matching to the determinant. Howard Karloff exhibited a Las Vegas parallel algorithm, i.e., one that never makes a mistake and runs in expected polylogarithmic time.

Can one remove the randomness? An NC algorithm for matching remains elusive but this year brought two nice results in that direction. Ola Svensson and Jakub Tarnawski give a quasi-NC algorithm for general graph matching. Quasi-NC means a quasipolynomial (2^{polylog}) number of processors. Nima Anari and Vijay Vazirani give an NC algorithm for matching on planar graphs.

Matching is up there with primality, factoring, connectivity, graph isomorphism, satisfiability and the permanent as specific problems that have played such a large role in helping us understand complexity. Thanks, matching problem, and may you find NC nirvana in the near future.

Thu, 02 Nov 2017 14:44:00 +0000

Given a group of people, can you pair them up so that each pair are Facebook friends with each other? This is the famous perfect matching problem. The complexity of matching has a rich history which got a little richer in the past few months.For bipartite graphs (consider only friendships between men and women), we have had fast matching algorithms since the 1950's via augmenting paths. In the 1965 classic paper, Path, Trees and Flowers, Jack Edmonds gives a polynomial-time algorithm for matching on general graphs. This paper also laid out an argument for polynomial-time as efficient computation that would lead to the complexity class P (of P v NP fame).

After Razborov showed that the clique problem didn't have polynomial-size monotone circuits, his proof techniques also showed that matching didn't have polynomial-size monotone circuits and Raz and Wigderson show that matching requires exponential size and linear depth. Because of Edmond's algorithm matching does have polynomial-size circuits in general. NOTs are very powerful.

Can one solve matching in parallel, say the class NC (Nick's class after Pippenger) of problems computable by a polynomial number of processors in polylogarithmic time? Karp, Upfal and Wigderson give a randomized NC algorithm for matching. Mulmuley, Vazirani and Vazirani prove an isolation lemma that allows a randomized reduction of matching to the determinant. Howard Karloff exhibited a Las Vegas parallel algorithm, i.e., never makes a mistake and runs in expected polylogarithmic time.

The k=1 case is FUN, the k=2 case is fun, the k ≥ 3 case is... you decide.

The following problem can be given as a FUN recreational problem to HS students or even younger: (I am sure that many of you already know it but my point is how to present it to HS students and perhaps even younger.)

Alice will say all but ONE of the elements of {1,...,10^{10}} in some order.

Bob listens with the goal of figuring out the number. Bob cannot possibly store 10^{10} numbers in his head. Help Bob out by giving him an algorithm which will not make his head explode.

This is an easy and fun puzzle. The answer is in the writeup I point to above.

The following variant is a bit harder but a bright HS student could get it: Same problem except that Alice leaves out TWO numbers.

The following variant is probably more appropriate for a HS math competition than for a FUN gathering of HS students: Same problem except that Alice leaves out THREE numbers.

The following variant may be easier because it's harder: Alice leaves out k numbers, for k a constant. It might be easier than the k=3 case since the solver knows NOT to use properties of 3.

I find it interesting that the k=1, k=2, and k≥ 3 cases are on different levels of hardness. I would like a more HS answer to the k≥ 3 case.
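For readers who want to experiment, here is a sketch of one standard approach (not necessarily the one in the writeup; the function names are mine): for k=1 Bob keeps a running sum, and for k=2 he also keeps a running sum of squares, which together determine the two missing numbers.

```python
import math

def missing_one(stream, n):
    # Bob keeps a single running total; the missing number is the
    # difference from the full sum 1 + 2 + ... + n = n(n+1)/2.
    return n * (n + 1) // 2 - sum(stream)

def missing_two(stream, n):
    # Track the sum and the sum of squares of what Alice says.
    s1 = n * (n + 1) // 2 - sum(stream)                               # x + y
    s2 = n * (n + 1) * (2 * n + 1) // 6 - sum(v * v for v in stream)  # x^2 + y^2
    # From x + y and x^2 + y^2 we get x*y = (s1^2 - s2) / 2,
    # so x and y are the roots of t^2 - s1*t + x*y = 0.
    xy = (s1 * s1 - s2) // 2
    d = math.isqrt(s1 * s1 - 4 * xy)
    return (s1 - d) // 2, (s1 + d) // 2
```

Bob only ever stores a couple of counters, never the 10^{10} numbers themselves; the k-missing variant generalizes to tracking the first k power sums, though that is exactly the part where a more HS-friendly answer would be nice.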

Tue, 31 Oct 2017 12:17:00 +0000

2017 Fall Jobs Post

For computer science faculty positions, it's best to look at the ads from the CRA and the ACM. For theoretical computer science specific postdoc and faculty positions check out TCS Jobs and Theory Announcements. AcademicKeys also lists a number of CS jobs. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post.

It never hurts to check out the webpages of departments or to contact people to see if positions are available. Even if theory is not listed as a specific hiring priority you may want to apply anyway since some departments may hire theorists when other opportunities to hire dry up. Think global--there are growing theory groups around the world and in particular many have postdoc positions to offer.

The computer science job market remains hot with most CS departments trying to hire multiple faculty. Many realize the importance of having a strong theory group, but it doesn't hurt if you can tie your research to priority areas like big data, machine learning and security.

Remember: in your research statement, your talk and your interview, you need to sell yourself as a computer scientist, not just a theorist. Show interest in other research areas and, especially in your 1-on-1 meetings, find potential ways to collaborate. Make the faculty in the department want you as a colleague, not just someone hiding out proving theorems.

Good luck to all on the market, and we can't wait for our Spring 2018 jobs post to see where you all end up.

Thu, 26 Oct 2017 15:04:00 +0000

You're finishing up grad school or a postdoc and ask yourself: what should I do for the rest of my life? We can't answer that for you, but we can help you figure out your options in the annual fall jobs post. We focus mostly on the academic jobs. You could work in industry, but there's nothing like choosing your own research directions, working directly with students and taking pride in their success.

Open: PROVE that pumping and reductions can't prove every non-reg lang non-reg.

Mon, 23 Oct 2017 15:56:00 +0000

Whenever I post on regular languages, whatever aspect I am looking at, I get a comment telling me that we should stop proving the pumping lemma (and often asking me to stop talking about it) and have our students prove things non-regular by either the Myhill-Nerode theorem or by Kolmogorov complexity. I agree with these thoughts pedagogically, but I am curious: is there a non-regular language L such that you CANNOT prove L non-regular via pumping and reductions?

There are many pumping theorems (one of which is an iff, so you could use it on all non-regular languages, but you wouldn't want to; it's in the paper pointed to later). I'll pick the most powerful pumping lemma that I can imagine teaching a class of ugrads:

If L is regular then there exists n0 such that for all w ∈ L with |w| ≥ n0 and all prefixes x' of w with |w| - |x'| ≥ n0, there exist x, y, z such that |x| ≤ n0, y is nonempty, w = x'xyz, and for all i ≥ 0, x'xy^i z ∈ L.

If this were all we could use then the question is silly: just take { w : number of a's NOT EQUAL to number of b's }, which is not regular but satisfies the pumping lemma above. SO I also allow closure properties. I define (and this differs from my last post; I thank my readers, some of whom emailed me, for help in clarifying the question): A ≤ B if there exists a function f such that f(A) = B and A regular implies B regular (e.g., f(A) = A ∩ a*b*).

(CORRECTION: the implication should be B regular → A regular. Paul Beame pointed this out in the comments.)

(CORRECTION: my definition does not work. I need something like what one of the commenters suggested and what I had in a prior blog post. Call CL a closure function if for all A, A regular implies CL(A) regular, like f(A) = A ∩ a*b*. We want a^n b^n ≤ { w : number of a's = number of b's } via f(A) = A ∩ a*b*. So we want: A ≤ B if there is a closure function f with f(B) = A.)

A set B is easily proven non-regular if either (a) B does not satisfy the pumping lemma, or (b) there exists a set A that does not satisfy the pumping lemma such that A ≤ B.
OPEN QUESTION (at least open for me; I am hoping someone out there knows the answer): Is there a language that is not regular but NOT easily proven non-regular?

Ehrenfeucht, Parikh and Rozenberg, in a paper Pumping Lemmas for Regular Sets (I could not find the official version but I found the tech report online: here. Ask your grandparents what a tech report is. Or see this post from Lance about tech reports.), proved an iff pumping lemma. They gave as their motivating example an uncountable number of languages that could not be proved non-regular even with a rather fancy pumping lemma. But their languages CAN be easily proven non-regular; I describe that here. (This is the same paper that proves the iff pumping lemma. It uses Ramsey theory, so I should like it. Oh well.)

SO, I looked around for candidates for non-regular languages that could not be easily proven non-regular. The following were candidates, but I unfortunately(?) found ways to prove them non-regular using the pumping lemma and closure (I found the ways by asking some bright undergraduates; to give credit, Aaron George did these):

{ a^i b^j : i and j are relatively prime }

{ x x^R w : x, w nonempty }, where R is reverse.

I leave it to the reader to prove these are easily proven non-regular.

To reiterate my original question: find a non-regular language that is not easily proven non-regular.

Side question: my definition of reduction se[...]
The Amazon Gold Rush

Unless you have hidden under a rock, you've heard that Amazon wants to build a second headquarters in or near a large North American city. Amazon put out a nice old fashioned RFP.

I've seen companies put subsidiaries in other cities, or move their headquarters away from their manufacturing center, like when Boeing moved to Chicago. But building a second headquarters, "a full equal" to their Seattle campus, seems unprecedented for a company this size. Much like a company has only one CEO or a college has one president, having two HQs raises the question of where decisions get made. Amazon is not a typical company, and maybe location means less these days.

Atlanta makes many short lists. We've got a burgeoning tech community, a growing city, sites with a direct train into the world's busiest airport, good weather, low cost of living and, of course, great universities. Check out the Techlanta and ChooseATL.

So am I using Amazon's announcement as an excuse to show off Atlanta? Maybe. But winning the Amazon HQ2 would be transformative to the city, not only in the jobs it would bring, but in immediately branding Atlanta as a new tech hub. Atlanta will continue to grow whether or not Amazon comes here but high profile wins never hurt.

Many other cities make their own claims on Amazon and I have no good way to judge this horse race (where's the prediction market?). It's impossible to tell how Amazon weighs their criteria, and it may come down to which city offers the best incentives. Reminds me of the Simons Institute competition announced in 2010 (Berkeley won), though with far larger consequences.

Thu, 19 Oct 2017 13:15:00 +0000

"Please provide an electronic copy and five (5) hard copies of your responses by October 19, 2017 to amazonhq2@amazon.com. Please send hard copies marked 'confidential' between the dates of October 16th – 19th to ..."

Hard copies? Just like the conference submissions of old. Key considerations for Amazon: a good site, local incentives, a highly educated labor pool and strong university system, proximity to major highways and airports, cultural community fit, and quality of life.

Reductions between formal languages

Let EQ = {w : number of a's = number of b's }

Let EQO = { a^{n}b^{n} : n ∈ N } (so it's Equal and in Order)

Typically we do the following:

Prove EQO is not regular by the pumping lemma.

Then to show EQ is not regular you say: If EQ were regular then EQ INTERSECT a*b* = EQO would be regular, hence EQ is not regular. (I know you can also show EQ with the pumping lemma, but that's not important now.)
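As a concrete brute-force illustration (a minimal sketch; the function names below are mine, not from the post), intersecting EQ with the regular language a*b* leaves exactly EQO:

```python
def in_EQ(w):
    """Membership in EQ: equal numbers of a's and b's."""
    return w.count('a') == w.count('b')

def in_astar_bstar(w):
    """Membership in the regular language a*b*: over {a,b}, a string
    has all a's before all b's iff it never contains 'ba'."""
    return set(w) <= {'a', 'b'} and 'ba' not in w

def in_EQO(w):
    """Membership in EQO = { a^n b^n }: EQ intersected with a*b*."""
    return in_EQ(w) and in_astar_bstar(w)
```

Every string of EQ lying in a*b* has the form a^n b^n, so if EQ were regular, closure under intersection with a regular language would force EQO to be regular too.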

One can view this as a reduction: A ≤ B if one can take B, apply a finite sequence of closure operations (e.g., intersect with a regular language, complement, replace every a with aba, replace every a with e (the empty string)), and get A.

If A is not regular and A ≤ B then B is not regular.

Note that EQO ≤ EQ ≤ \overline{EQ}. Since EQO is not regular (by pumping) we get that EQ and \overline{EQ} are not regular.

Hence we could view the theory of showing things non-regular like the theory of NP-completeness, with reductions and such. However, I have never seen a chain of length more than 2.

BUT, consider the following! Instead of using the pumping lemma we use communication complexity. I have been able to show (and this was well known) that EQ is not regular using communication complexity:

Let EQH = { (x,y) : |x| = |y| and number of a's in xy = number of b's in xy }.

The communication complexity of EQH is known to be log n + Θ(1). Important: NOT O(1).

If EQ were regular then Alice and Bob would have an O(1) protocol: Alice runs x through the DFA and transmits the state to Bob; then Bob runs y from that state to the end and transmits 1 if it ended in an accept state, 0 if not.
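The state-forwarding protocol above can be sketched directly. Since EQ has no DFA, the sketch below (all names are mine) uses a genuinely regular language, even number of a's, to show why any DFA yields an O(1)-communication protocol:

```python
def run_dfa(delta, state, s):
    """Run transition function delta from `state` across the string s."""
    for ch in s:
        state = delta[(state, ch)]
    return state

# A 2-state DFA for a genuinely regular language: even number of a's.
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 0, (1, 'b'): 1}
accepting = {0}

def protocol(x, y):
    # Alice's entire message is the DFA state after her half x:
    # constantly many bits for a fixed DFA, no matter how long x is.
    message = run_dfa(delta, 0, x)
    # Bob resumes the run from Alice's state and announces the answer.
    return run_dfa(delta, message, y) in accepting
```

If EQ had a DFA, this same two-message protocol would decide EQH with O(1) communication, contradicting the log n + Θ(1) lower bound.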

But I was not able to show EQO is not regular using communication complexity. SO imagine a bizarro world where I taught my students the communication complexity approach but not the pumping lemma. Could they prove that EQO is not regular? For one thing, could they prove EQO ≤ EQ? Or show that this CANNOT be done?

Anyone know?

One could also study the structure of the degrees induced by the equivalence classes.

If this has already been done, let me know in the comments.

Mon, 16 Oct 2017 15:59:00 +0000
