Updated: 2018-04-20T03:38:01.276-06:00
2018-02-14T07:00:37.576-07:00
This is a guest post by a colleague in the TCS community, a person I know. If you read other TCS blogs you might come across this there. This is by design. Please do read it.

Every #MeToo story over the last several months has made me pause. My heart races and my concentration fails. The fact that the stories have largely focused on the workplace adds to my difficulty.

Do I speak out too?

I have shared a few stories with colleagues about things that have happened to me in school and at work. But these stories have been somewhat lighthearted events that have been easy to share without outing the perpetrators.

For example, I have told a story about a university employee telling me, in so many words, that I should be barefoot and pregnant and not in the office. What I didn't share is that the same employee, later that year -- despite the fact that our common boss knew about this story because I did indeed report it -- was awarded a best employee award. How do you think that made me feel? Like my experience didn't matter and that such comments are condoned by our department.

Why didn't I share that information widely? Because I was worried that folks would then be able to figure out who the culprit was. And isn't that even worse? Shouldn't it be the sexist who is worried, and not the woman who, yet again, is made to feel like she doesn't belong?

---

Let me go off on a tangent. For years I have not flown. Ostensibly I stopped flying because of the contribution to the climate crisis. When I travel, I go by train. It takes longer, but it has been surprisingly pleasant. And when travel takes 3-4 times as long, you don't do it as often, further reducing your carbon footprint. Of course, that means that I don't go to conferences unless they are nearby.

But when I really think about it, is this really the reason I stopped going to conferences? A conference I would normally go to was held nearby a few years ago and I didn't go.
Sure, I suffered a grievous injury two weeks before, but I hadn't even registered. I had planned not to go long before that injury.

So, really, why do I no longer attend conferences? Partly I don't feel that I need to anymore, now that I have tenure. When I stopped attending conferences, I was able to "coast into" tenure. Letter writers would remember me. I essentially stopped going to conferences and workshops as soon as I possibly could.

---

Back to the beginning, or close to it.

I was nervous at the first conference I attended as a graduate student. One of the reasons I was nervous was that I was athletic at the time and planned on daily runs while I was attending -- I was worried that it might be viewed as a waste of time. My advisor, who also went to the conference, found out about my athleticism and suggested we run together. This was a relief to me. That is, until we were running and he started talking about his lackluster sex life with his wife. I responded by picking up the pace and feigning an illness on the remaining days.

On the last day of the conference we were out for dinner with a large group of people, and dinner went late into the night. I excused myself, as I had a 4AM bus to catch. My advisor walked me out of the restaurant and awkwardly said something about wanting me to stay and that we should talk. I stuck to leaving, knowing that I needed some sleep before the long trip home the next day. He said we should talk when we were back in the office.

Honestly, at the time I thought he was going to complain about my talk or my professional performance in some way. I worried about it all through the weekend until we met next. I brought it up at the end of our meeting, asking what he wanted to talk about, naively expecting professional criticism. When he said I must surely know, in a certain voice, I knew he wasn't talking about work. I feigned ignorance, and he eventually brushed it off and said not to worry.
In the coming months, he would cancel meetings and otherwise make himself unavailable. After a half year I realized I wouldn't be able to be successful without having a supportive advisor and, despite first[...]

2018-01-22T09:00:18.345-07:00
My first two posts on double blind review triggered good discussion by Michael Mitzenmacher and Boaz Barak (see the comments on these posts for more). I thought I'd try to synthesize what I took away from the posts and how my own thinking has developed.

First up, I think it's gratifying to see that the basic premise -- "single blind review has the potential for bias, especially with respect to institutional status, gender and other signifiers of in/out groups" -- is granted at this point. There was a time in the not-so-distant past when I wouldn't have been able to even establish this baseline in conversations.

The argument therefore has moved to one of tradeoffs: does the installation of DB review introduce other kinds of harm while mitigating harms due to bias? Here are some of the main arguments that have come up:

Author identity carries valuable signal for evaluating the work. This argument manifested itself in comments (and I've heard it made in the past). One specific version of it, which James Lee articulates, is that all reviewing happens in a resource-limited setting (the resource here being time), and so signals like author identity, while not necessary to evaluate the correctness of a proof, provide a prior that can help focus one's attention. My instinctive reaction to this is "you've just defined bias". But on reflection I think James (and other people who've said this) are pointing out that abandoning author identity is not free. I think that's a fair point to make. But I'd be hard pressed to see why this increase in effort negates the fairness benefits of double blind review (and I'm in general a little uncomfortable with this purely utilitarian calculus when it comes to bias).

As a side note, I think that focusing on paper correctness is a mistake. As Boaz points out, this is not the main issue with most decisions on papers.
What matters much more is "interestingness", which is very subjective and much more easily bound up with prior reactions to author identity.

Some reviewers may be aware of author identity and others might not, and this inconsistency could be a source of error in reviewing. Boaz makes this point in his argument against DB review. It's an interesting argument, but I think it also falls into the trap of absolutism: i.e., that imperfections in this process will cause catastrophic failure. This point was made far more eloquently in a comment on a blog post about ACL's double blind policy (emphasis mine):

I think this kind of all-or-nothing position fails to consider one of the advantages of blind review. Blind review is not only about preventing positive bias when you see a paper from an elite university, it's also about the opposite: preventing negative bias when you see a paper from someone totally unknown. Being a PhD student from a small group in a little known university, the first time I submitted a paper to an ACL conference I felt quite reassured by knowing that the reviewers wouldn't know who I was. In other words, under an arXiv-permissive policy like the current one, authors still have the *right* to be reviewed blindly, even if it's no longer an obligation, because they can make their identity known indirectly via arXiv+Twitter and the like. I think that right is important. So the dilemma is not a matter of "either we totally forbid dissemination of the papers before acceptance in order to have pure blind review (by the way, 100% pure blind review doesn't exist anyway because one often has a hint of whom the authors may be, and this is true especially of well-known authors) or we throw the baby out with the bathwater and dispense with blind review altogether".
I think blind review should be preserved at least as a right for the author (as it is now), and the question is whether it should also be an obligation or not.

Prepublication on the arXiv is a desirable goal that fosters open access and the speedy dissemination of information. Double blind review is irrevocably in conflict with non-anonymous pre-print dissemination. Th[...]

2018-01-09T00:41:39.915-07:00
I've had a number of discussions with people both before and after the report that Rasmus and I wrote on the double-blind experiment at ALENEX. And I think it's helpful to lay out some of my thoughts on both the purpose of double blind review as I understand it, and the logistical challenges of implementing it.

What is the purpose of double blind review? The goal is to mitigate the effects of the unconscious, implicit biases that we all possess and that influence our decision making in imperceptible ways. It's not a perfect solution to the problem. But there is now a large body of evidence suggesting that all people are susceptible to implicit biases, whether regarding institutional status, individual status, or demographic stereotyping. And what's worse is that we are incredibly bad at assessing or detecting our own biases. At this point, a claim that a community is not susceptible to bias is the one that needs evidence.

Double blind review can mitigate this effect. Probably the most striking example of this is the case of orchestra auditions, where requiring performers to play behind a screen dramatically increased the number of women in orchestras.

What is NOT the purpose of double blind review? Double blind review is not a way to prevent anyone from ever figuring out the author identity. So objections to blinding based on scenarios where author identity is partially or wholly revealed are not relevant. Remember, the goal is to eliminate the initial biases that come from first impressions.

What makes DB review hard to implement at theory venues? Theory conferences do two things that are different from other communities. We
- require that PC members do NOT submit papers
- allow PC members to initiate queries for external subreviewers.

These two issues are connected. If you don't allow PC members to submit papers, you need a small PC. If you have a small PC, each PC member is responsible for many papers.
If each PC member is responsible for many papers, they need to outsource the effort to be able to get the work done. As we mentioned earlier, it's not possible to have PC members initiate review requests if they don't know who might be in conflict with a paper whose authors are invisible.

So what do we do? There's actually a reasonably straightforward answer:
- We construct the PC as usual, with the usual restrictions.
- We construct a list of "reviewers" -- for example, "anyone with a SODA/STOC/FOCS paper in the last 5 years" or something like that. Ideally we will solicit nominations from the PC for this purpose.
- We invite this list of people to be reviewers for SODA, and do this BEFORE paper submission.
- Authors will declare conflicts with reviewers and domains (and reviewers can also declare conflicts with domains and authors).
- At bidding time, the reviewers will be invited to bid on (blinded) papers. The system will automatically assign people.
- PC members will also be in charge of papers as before, and it's their job to manage the "reviewers" or even supply their own reviews as needed. Any remaining requests for truly external subreviewing will be handled by the PC chairs. I expect this number will be a lot smaller.

Of course all of this is pretty standard at venues that implement double blind review.

But what if a sub-area is so small that all the potential reviewers are conflicted? Well, if that's the case, then it's a problem we face right now, and DB review doesn't really affect it.

What about if a paper is on the arXiv? We ask authors and reviewers to adhere to double blind review policies in good faith. Reviewers are not expected to go hunting for the author names, and authors are expected not to draw attention to information that could lead to a reveal. As with any system, we trust people to do the right thing, and that generally works.

But labeling CoIs for so many people is overwhelming. It does take a little time, but less time than one expects.
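To make the proposed workflow concrete, here is a minimal sketch of the conflict-aware assignment step. This is entirely hypothetical -- the function name, data shapes, and load cap are my own inventions, not any actual conference system's. Each blinded paper is given its highest bidders, skipping reviewers with a declared conflict and reviewers who are already at their load cap.

```python
def assign_reviewers(papers, bids, conflicts, per_paper=3, load_cap=6):
    """Greedy conflict-aware assignment of bidding reviewers to blinded papers.

    bids: paper -> {reviewer: bid strength}
    conflicts: paper -> set of conflicted reviewers (declared by authors/reviewers)
    """
    load = {}          # reviewer -> number of papers assigned so far
    assignment = {}
    for p in papers:
        bid = bids.get(p, {})
        ranked = sorted(bid, key=bid.get, reverse=True)  # strongest bids first
        chosen = []
        for r in ranked:
            if r in conflicts.get(p, set()):   # declared CoI: never assign
                continue
            if load.get(r, 0) >= load_cap:     # reviewer already fully loaded
                continue
            chosen.append(r)
            load[r] = load.get(r, 0) + 1
            if len(chosen) == per_paper:
                break
        assignment[p] = chosen
    return assignment

demo = assign_reviewers(
    papers=["p17", "p42"],
    bids={"p17": {"alice": 3, "bob": 2}, "p42": {"bob": 3, "carol": 1}},
    conflicts={"p17": {"alice"}},
    per_paper=1,
)
# "alice" is conflicted with p17, so p17 goes to its next-highest bidder.
print(demo)  # {'p17': ['bob'], 'p42': ['bob']}
```

In the process described above, any paper this greedy pass leaves short (all bidders conflicted or overloaded) would fall back to the PC member in charge or to the chairs, matching the role they play for external subreviews.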
Prac[...]

2018-01-07T13:54:24.304-07:00
Rasmus Pagh and I chaired ALENEX 2018, and we decided to experiment with double blind review for the conference. What follows is a report that we wrote on our experiences doing this. There are some useful notes about logistics, especially in the context of a theoretically-organized conference on experimental algorithms.

ALENEX 2018 Double Blind Review

For ALENEX 2018, we decided to experiment with a double blind review process, i.e., one in which authors and reviewers were unaware of each others' identity. While double blind review is now almost standard in most computer science conferences, it is still relatively uncommon in conferences that focus on theoretical computer science and related topics.

The motivation

In the original argument we presented to the ALENEX Steering Committee, we gave the following reasons for why we wanted double blind review:

1. Eliminating bias. Andrew Tomkins did an experiment for WSDM this year and wrote a report on it: https://arxiv.org/abs/1702.00502. One particular observation:

"Reviewers in the single-blind condition typically bid for 22% fewer papers, and preferentially bid for papers from top institutions. Once papers were allocated to reviewers, single-blind reviewers were significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors and top institutions. The estimated odds multipliers are 1.66 for famous authors and 1.61 and 2.10 for top universities and companies respectively, so the result is tangible."

2. Common practice. Virtually every CS community except theory does double blind review, including most of ML (NIPS, ICML), DB (VLDB, SIGMOD), Systems (NSDI), etc. While theory papers have their own idiosyncrasies, we argued that ALENEX is much closer in spirit and paper structure to more experimental venues like the ones listed.

3. Limited burden on authors and reviewers for an experiment. There was no real logistical burden.
We were not blocking people from posting on the arXiv or talking about their work; we were merely requiring that submissions be blinded (which is easy to do). For reviewers, too, this is not a problem -- typically you merely declare conflicts based on domains, and that takes care of figuring out who's conflicted with which paper (but more on this later).

4. Prototyping. While theoryCS conferences in general do not make use of double blind review, ALENEX is a small but core venue where such an experiment might reveal useful insights about the viability of double blind review overall. So we don't have to advocate changes at SODA/STOC/FOCS straight up without first learning how it might work.

5. PC submissions. We are allowing PC members to submit papers, as has been done before at ALENEX. In this case double blind review is important to prevent even the appearance of conflict.

The process

Before submission: We provided a submission template for authors that suppressed author names. We also instructed authors on how to refer to prior work or other citations that might leak author identity -- in brief, they were asked to treat these as any third-party reference. We also asked authors to declare conflicts with PC members.

After submission/before reviews: We recognized that authors might not be used to submitting articles in double blind mode, so we went over each submission after it was submitted and before we opened up bidding to PC members, to make sure that the submissions were properly blinded. In a few cases (fewer than 10 of 49) we had to ask authors to make modifications.

During the review process: The next step was to handle requests for subreviewers. Since PC members could not determine CoIs (conflicts of interest) on their own, all such requests were processed through the PC chairs. A PC member would give us a list of names and we would pick one.
(So more private information retrieval than a zero knowledge protocol!)

Issues

A number of issues came up that appear to be unique to the theory conference review process. We[...]

2017-10-22T23:49:41.985-06:00
This past Friday, I gave a lecture on cake cutting algorithms at the Timpanogos Women's Facility as part of a lecture series organized by my Utah colleague Erin L. Castro and her Utah Prison Education Project. The project's mission is to...

provide quality, sustained, and meaningful higher educational opportunities to individuals incarcerated in Utah state prisons. Through embodying the mission of the University of Utah, the Project assists incarcerated students and non-incarcerated volunteers to live lives of impact, both in prison and post-incarceration, by fostering leadership, civic engagement, and critical inquiry. UPEP aims to create lasting impact in our state and our communities by investing in people and providing them the tools necessary for empowerment and lifelong learning.

I think this is incredibly important work. We don't need to get into a much larger discussion about rehabilitation versus punishment theories of justice to appreciate how providing access to education might allow incarcerated students the ability to turn their lives around, or even find opportunities for work once they leave prison so that they have a way to support themselves without falling back into criminal activity. Maybe the amount of education they get in prison might even one day be a predictive factor in deciding whether they will reoffend!
2017-09-29T09:00:34.916-06:00
I'm a perpetual student when it comes to my guitar playing. I started off learning acoustic guitar, and taught myself a little bass in college. When I was in the college band, our music advisor played some classical guitar, and that got me hooked.

I've had a number of teachers through grad school and beyond, but I've always plateaued at a level where I'm competent but no better. At some point I realized that what motivated me to play was the right kind of music (this I also learnt when watching my children learn an instrument), and that inexorably led me to my new quest: learning flamenco guitar.

Flamenco is a very passionate style of playing -- classical guitar can seem bloodless and sedate in comparison. It also requires many different right hand techniques that are not common in classical guitar.

The net result is that I'm back to being a beginning student again -- struggling with mechanics, hand position and note playing. It's a lot of frustration with the occasional moment of transcendence. I whine at my teacher the way students whine at me, and he's sneaky enough that now he just asks me "so what would you tell your own students?" and that shuts me up.

Which brings me to the point of this post (what??? posts need a point?). We spent a lesson last week talking about extracting expression and feeling from the instrument. I kept asking him about what tools I could use (beyond the usual tone control by moving up and down the fretboard, and using volume) to express more emotion, and what emotion that would be. His response was first to show me this beautiful video of an interviewer "talking" to Paco de Lucía's guitar (https://www.youtube.com/watch?v=KLtlM5r8rdc), and then to explain to me that I have to dig deep within myself to find the way I can relate to the music.
And then it hit me (painfully). Aditya Bhaskara and I are running a theory seminar on reading classic theory papers where (much like my previous seminar) there's a strong emphasis on getting to the core ideas and intuitions that drive a result. I'm constantly exhorting students (even more so than Aditya -- I think it's interesting to see how different people absorb messages from a paper) to find the core intuition in the paper and be able to express it almost as a short "story". And that's essentially what my teacher is exhorting me to do.

In both cases, the expert is trying to get the student to transcend the purely mechanical aspects of (reading the paper/playing the instrument) and get to the deeper (mathematical/emotional) truth of the (paper/piece). And it's hard precisely because the student in either case is still struggling with the mechanical, and doesn't yet have the facility with the tools to let them fall away.

Does this mean I'll be a more enlightened teacher? I doubt it :). But I do have a little more sympathy for my students. [...]

2017-09-29T00:02:06.205-06:00
After the infamous Google memo (and frankly, for much longer if you work in algorithmic fairness), the idea of something being a "social construct" has popped up again, and I will admit that I've struggled with trying to understand what that means (damn you, focused engineering education!).

Ta-Nehisi Coates' article about race is a short and excellent read. But I also want to highlight something much closer to home. BYU Radio's Julie Rose did an interview with Jacqueline Chen (at the U) on her recent work on perceptions of race in the US vs Brazil. The interview is here (and it's short -- starting at around 20 minutes in), and in it Prof. Chen very masterfully lays out the way in which race is perceived and how that perception changes with context. The interview is based on a recently published paper ($$).

One important takeaway: the way in which one's racial identity is perceived varies greatly between the US (where it appears to be influenced by parental information) and Brazil (where skin color appears to be the dominant factor). More importantly, the idea of race as immutable vs changeable, as a categorical attribute versus a continuous one, all vary.

And that's what we mean by saying that X (here, race) is a social construct. It's not saying that it's fictitious or less tangible, but that it's defined by the way we talk about it in society.

Why is this important? When we collect data as a way to predict behavior, we're making an implicit claim that behavior can be predicted (and explained) by intrinsic and often immutable descriptors of an individual. We use (or don't use) "race" as a feature when building models. But this itself is a huge assumption! It assumes that we can intelligently ascribe features to individuals that capture these notions, and that they are defined solely by the individual and not by context.
The brilliant Medium article about the paper that claimed to predict criminality from facial features makes this point very well. But do we capture the entire history of environmental factors that make up the story of an individual? Of course not. We essentialize an individual into a collection of features that we decide captures all their relevant traits for the purpose of prediction, and then we build a model that rests on this extremely problematic idea.

Much of the work I do on fairness can be reduced to "check your data, and check your algorithm". What we're also thinking about (and what directly speaks to this issue) is "check your features".

It turns out that way back in 1921, Walter Lippmann had something interesting to say about all of this, in a longer essay he wrote on the importance of frames as mediating how we perceive the world (and it says something about fake news and "true facts" as well):

And so before we involve ourselves in the jungle of obscurities about the innate differences of men, we shall do well to fix our attention upon the extraordinary differences in what men know of the world. I do not doubt that there are important biological differences. Since man is an animal it would be strange if there were not. But as rational beings it is worse than shallow to generalize at all about comparative behavior until there is a measurable similarity between the environments to which behavior is a response. [...]

2017-08-23T09:00:00.176-06:00
In the light of the wave of racist and neo-Nazi bile being slung around in Charlottesville and beyond, Karl Popper's Paradox of Tolerance has been doing the rounds. Paraphrased, it says:

In a tolerant society, one must be tolerant of everything except intolerance.

There's an interesting self-referential element there that's reminiscent of Gödel's incompleteness theorems. To wit,
2017-06-23T13:28:36.840-06:00
I was the chair of the plenary session on Wednesday, so I was too focused on keeping track of time and such to pay full attention to the talks. Having said that, all the speakers we've had so far have done a bang-up job of keeping within their time windows without much prompting at all.

2017-06-23T13:16:48.908-06:00
Minority Report (the movie) is 15 years old. Who knew!

2017-06-23T13:02:16.089-06:00
There's a weird phenomenon in the world of streaming norm estimation: for $\ell_0, \ell_1, \ell_2$ norm estimation, there are polylog (or less)-space streaming approximation algorithms. But once you get to $\ell_p, p \ge 3$, the required space suddenly jumps to polynomial in $n$. What's worse is that if you change norms, you need a new algorithm and have to prove all your results all over again.

2017-06-22T13:19:08.403-06:00
The above is an actual quote from Oded Goldreich in his hilarious speech accepting the Knuth Prize for 2017. This speech was highly anticipated, because most of us have spent years reading and marvelling at Oded's opinions (he addressed the elephant in the room very promptly).

2017-06-22T13:06:46.597-06:00
(For various reasons, I don't have wifi access at my Airbnb, so my posts are delayed. But it's not like you're hanging on my every word... are you?)

2017-06-21T08:44:42.867-06:00
There was an interesting talk on developing spectral tools for directed graphs. The magic of spectral methods for undirected graphs comes from the interpretation of the graph as a Markov chain, and then using spectral properties of the Laplacian (which is symmetric and positive semidefinite) to reason about a host of properties like conductance, mixing times, sparsification, and others.

Suppose our directed graph is Eulerian, which means that each vertex has the same incoming and outgoing degree. Then if we construct the associated Laplacian $L$ and symmetrize it as $L' = (L + L^T)/2$, the resulting symmetric matrix $L'$ is positive semidefinite -- indeed, it is the Laplacian of the undirected graph obtained by averaging the weights of opposite edges!

This turns out to be a key ingredient of designing efficient algorithms for estimating spectral quantities on directed graphs, and is a trick worth remembering.
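The symmetrization trick is easy to check numerically. The following is my own toy illustration (not from the talk): build a small Eulerian digraph, form the out-degree Laplacian $L = D - A$, symmetrize it, and verify that the quadratic form $x^T L' x$ is nonnegative and vanishes on the all-ones vector.

```python
import random

# A small Eulerian directed graph: the cycle 0->1->2->3->0 plus the
# opposite pair 0->2 and 2->0; every vertex has in-degree == out-degree.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]

out_deg = [sum(1 for u, v in edges if u == i) for i in range(n)]
in_deg = [sum(1 for u, v in edges if v == i) for i in range(n)]
assert out_deg == in_deg  # the Eulerian condition

# Directed Laplacian L = D_out - A, then the symmetrization (L + L^T)/2.
A = [[0.0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] += 1.0
L = [[(out_deg[i] if i == j else 0.0) - A[i][j] for j in range(n)]
     for i in range(n)]
L_sym = [[(L[i][j] + L[j][i]) / 2 for j in range(n)] for i in range(n)]

def quad_form(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

# PSD check: x^T L_sym x >= 0 for random x. Because the graph is Eulerian,
# L_sym is the Laplacian of the undirected graph with weights (a_uv + a_vu)/2,
# so the form is a sum of squares and is exactly 0 on the all-ones vector.
rng = random.Random(1)
assert all(quad_form(L_sym, [rng.uniform(-1, 1) for _ in range(n)]) >= -1e-9
           for _ in range(100))
assert abs(quad_form(L_sym, [1.0] * n)) < 1e-12
```

Note that the zero eigenvalue on the all-ones vector is exactly why "positive semidefinite" (rather than "positive definite") is the right statement: the symmetrized matrix is an honest undirected Laplacian.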
2017-06-21T08:08:58.687-06:00
The plenary talk on Tuesday morning was by Avi Wigderson, on the nature of TCS.

2017-06-21T08:07:00.657-06:00
Day 2 at TheoryFest, and people are still attending talks. Ok, maybe I shouldn't be that surprised. But it's kind of nice to see anyway. The lounge area is deserted during the sessions and full during the breaks.

2017-06-20T07:02:50.002-06:00
So far, the tutorials appear to have been well attended. The DL tutorial had a full house in a big room, but the other two tutorials did pretty well too. The plenary talks (the reason I'm here) start today, and it will be interesting to see what kind of attendance we see.

2017-06-20T06:55:54.346-06:00
(ed: I'm guessing you never thought those words would appear together in the same phrase)

Ruslan Salakhutdinov gave a tutorial on deep learning today. Now deep learning is a tricky topic for theory (more on that below), but I thought he did a nice job in his two hours of explaining the basics of how a neural net works and how it's trained, without getting too far into the engineering weeds, while still being able to explain important ideas like dropout, SGD, batch normalization and momentum. He skillfully avoided the alphabet soup of architectures in a way that didn't really affect one's understanding (I think). He didn't get too much into RNNs, but I think that was a conscious and fair choice.

He also discussed the unsupervised side of DL: autoencoders, RBMs, DBMs, and GANs. Now I have a little bit of an advantage here because we're running a summer reading group on GANs, but I liked his framing in terms of supervised and unsupervised, as well as the different kinds of generative criteria (probabilistic/not, tractable/intractable, explicit/implicit) used to classify the different approaches. In a first at STOC (maybe?), the audience got to see a video of an RL system learning to play Doom. It was pretty neat.

Having said that, I'm not exactly the right audience for this kind of talk, since I'm decently familiar with deep learning. What surprised me, though, was that when I polled people during the breaks, most of the people who attended the tutorial felt the same way. And the common refrain was "We've heard so many faculty candidates talk about deep learning that we know the basics now"!

So I almost wonder if Russ miscalibrated the level of the audience. There was also some minor grumbling about the lack of clear open problems. I actually don't fault him for that; I think it might have been useful to expose core ideas for which answers don't exist, and some of these came out in the Q&A. But let me make a more general observation.
Deep learning is a tricky topic for theoreticians to negotiate, for a number of reasons.

Firstly, I don't think it's even useful to ask the most general form of the "what does a neural net DO" question. Neural nets are very, very general (in particular, a 2-level neural net can approximate any function), so asking general questions about them is like asking to characterize a Turing machine with no constraints: you can't say much beyond recursive and r.e. I think the right questions are much more specific.

Secondly, DL right now is very much an engineering discipline, which is to say that the practice of DL is focused on trying out engineering hacks that appear to yield improvements. And these improvements are significant enough that it really doesn't matter why they work. In other words, DL doesn't need theory... at least not now.

Even if you don't grant the previous two positions, there's another issue. Descriptions of DL systems feel a lot like experimental physics: "hey, we did all of this and it worked this way. Now give us a theory". With the difference that there's no "there" there: there's no fixed Nature that we can design theoretical laws against, only a gazillion-dimensional, highly nonconvex landscape where we don't even try to find a provably high quality answer.

So I think we're on our own if we want to (or care to) understand the computational power and expressivity of neural networks. It's very interesting, and we're seeing nice results begin to appear, but we should do it because there's interesting theory to be had here, rather than trying to hew too close to actual DL systems. [...]

2017-06-18T06:34:38.514-06:00
I'm at the airport getting ready to board my flight for Montreal to attend TheoryFest 2017. And much to my amazement, I discover that STOC has its own event mobile app. Who knows, maybe this means that by next decade theory conferences will do double blind review? (ed: stop it, now that's crazy talk!)

2017-05-01T09:44:41.399-06:00
Certain memory aids are so... well... memorable that they stick in your mind exactly the way they should. Here are three that I've heard of:

2017-04-19T00:51:25.368-06:00
(ed: I can't believe it's been four months since my last post. Granted, I've been posting over at algorithmicfairness.wordpress.com, but still. Time to turn in my theory card...)

2016-12-12T04:44:49.711-07:00
I'm at IIIT Allahabad teaching a short course on fairness, accountability and transparency in machine learning. This is part of the GIAN program sponsored by the Government of India to enable Indian researchers from outside the country to come back and share their knowledge. The clever pun here is that GIAN -- which stands for Global Initiative of Academic Networks -- also means 'knowledge' in Hindi (ज्ञान).

2016-10-20T08:00:17.417-06:00
One consistent msg I heard re ethics/policy today is that even though no one really knows the right answer, the question is how to move fwd

— Hal Daumé III (@haldaume3) October 19, 2016
2016-09-26T15:49:38.827-06:00
Sorelle Friedler, Carlos Scheidegger and I just posted a new paper on the arXiv, where we try to lay out a framework for talking about fairness in a more precise way.

2016-08-26T21:37:36.539-06:00
My student John Moeller (moeller.fyi) just defended his Ph.D. thesis today! And yes, there was a (rubber) snake-fighting element to the defense.