JonUdell - Channel 9
http://channel9.msdn.com/posts/JonUdell/RSS/

Derik Stenerson on the past, present, and future of the iCalendar specification

Mon, 13 Oct 2008 13:30:00 GMT

Derik Stenerson first came to Microsoft on an internship as a Test Engineer on Microsoft Mail. After graduating, he joined Microsoft full time in the email group and worked in various roles on email and scheduling products, including Schedule+ and Exchange. His passion for calendaring and scheduling led to work on the iCalendar standard (IETF RFC 2445) and later on a hosted self-service scheduling solution for small businesses. For the past few years Stenerson has been exercising his other passion for user-centered design while building features for Microsoft Dynamics CRM.


Next month marks the tenth anniversary of RFC 2445. To celebrate the occasion, Derik joins Jon Udell on Interviews with Innovators to discuss the past, present, and future of the venerable iCalendar specification.
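For readers who have never looked inside an .ics file, here is a minimal sketch of the kind of object RFC 2445 defines. The Python helper and the sample event details are hypothetical; the BEGIN/END blocks and the UID, DTSTART, DTEND, and SUMMARY properties are the core of the spec.

# Minimal sketch: emit an RFC 2445 (iCalendar) VEVENT by hand.
# The event details and function name are invented for illustration.
from datetime import datetime

def make_vevent(uid, start, end, summary):
    fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps, in the spec's basic form
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//toy generator//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTAMP:{datetime.utcnow().strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(make_vevent("1234@example.com",
                  datetime(2008, 11, 1, 17, 0),
                  datetime(2008, 11, 1, 18, 0),
                  "RFC 2445 tenth anniversary call"))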




Scott Prevost explains Powerset's hybrid approach to semantic search

Fri, 26 Sep 2008 09:12:00 GMT

Scott Prevost is General Manager and Director of Product for Powerset, the company whose semantic search engine was recently acquired by Microsoft. In this interview he describes the history of Powerset's natural language engine, and explains how it works as part of a hybrid approach to indexing, retrieval, and ranking. Scott will expand on these topics in his keynote address at Web 3.0 in October.

JU: The notion of search enhanced by natural language understanding has a long history. I was just reading Danny Sullivan's rant about how he's been hearing about this for years, but it's never amounted to anything. Of course, people are all over the map on this topic, but nonetheless you guys are doing certain demonstrable things, and working on other things. So I'd like to find out more about how the technology -- which was acquired from Xerox, where it had been worked on for a long time -- actually works. What you mean by natural language understanding, how you're applying the technology, and where this is going.

SP: Well, there are a lot of questions tucked in there, but maybe we can start with what we licensed from PARC, what was formerly Xerox PARC. They had been working for 30 years in a linguistic framework called LFG -- lexical functional grammar -- and they built a very robust parser. It's probably parsed more sentences than any other parser in the world. What it allows us to do is take apart every document that we index, sentence by sentence, uncover its linguistic structure, and then translate that into a semantic representation we can encode in our index.

JU: Can you confirm or deny something that Danny Sullivan reported, which is that it takes on the order of two months to index Wikipedia one time using this method?

SP: [laughs] That's a very, very old number. It all depends on the number of machines, but we do it now on the order of a couple of days.

JU: And it scales linearly?

SP: Yes. And in fact we're working really hard to bring those numbers down. We have a very small data center right now. We're looking at what it takes to stand up a 2 billion document index, and it's absolutely attainable. I think Danny Sullivan realized, when he wrote another article on the day we launched, that we're doing something different. He called us an understanding engine. It's not the case that we're just applying linguistic technology at runtime, by parsing the query and then trying to use the same old kind of keyword index for retrieval. We're actually doing the heavy lifting at index time. We're actually reading each sentence in the corpus, pulling out semantic representations, indexing those semantic representations, and then at query time we try to match the meaning of the query to the meaning of what's in the document. That allows us to both increase precision and improve recall.

JU: When you say semantic representation, what it means -- or anyway what's evident in the current version -- is subject/verb/object triples, basically. That seems to be how things are organized.

SP: That's one small part of what the engine does. It's the part we've exposed in the user interface in a very direct way. But actually those are only three of several dozen semantic roles that we uncover at index time, and all of those roles go into selecting documents, and snippets of documents, when we present the organic results.

JU: Really? So even though the patterns aren't exposed in the advanced query interface, they're still used?

SP: That's right, they're still used.
JU: What would be an example of one of those other patterns, and how it's applied? SP: So, you ask a question like: "When did Hurricane Katrina strike?" The 'when' is a certain kind of semantic role that we've indexed, separately from the subject, verb, and object. There are a number of other roles like that: location, time, other types of relationships. JU: I saw a private demo, about a year ago, in which one of the most striking examples was something like: "Compani[...]
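To make the idea of indexing semantic roles (rather than bare keywords) concrete, here is a toy sketch in Python. It is not Powerset's engine; the role extraction is assumed to have happened already, and the tuples, names, and query are invented for illustration.

# Toy "semantic index": role-labeled facts instead of bare keywords.
# The extracted tuples below are hypothetical stand-ins for what a real
# NLP pipeline (e.g., an LFG parser) would produce at index time.
from collections import defaultdict

facts = [
    {"subject": "Hurricane Katrina", "verb": "strike",
     "object": "New Orleans", "when": "August 2005"},
    {"subject": "Microsoft", "verb": "acquire",
     "object": "Powerset", "when": "2008"},
]

# Index each fact under every (role, value) pair it contains.
index = defaultdict(list)
for i, fact in enumerate(facts):
    for role, value in fact.items():
        index[(role, value.lower())].append(i)

def answer(subject, verb, want_role):
    """Match a parsed query against the role index and return one role."""
    hits = set(index[("subject", subject.lower())]) & \
           set(index[("verb", verb.lower())])
    return [facts[i].get(want_role) for i in sorted(hits)]

# "When did Hurricane Katrina strike?" -> ['August 2005']
print(answer("Hurricane Katrina", "strike", "when"))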


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/prevost/prevost.wma




Kristin Tolle on biomedical initiatives at Microsoft Research

Thu, 18 Sep 2008 15:37:00 GMT

Kristin Tolle is the Senior Research Program Manager for Biomedical Computing for External Research in Microsoft Research. Projects run the gamut, she says, from "bench to bedside". In this interview she discusses two major biomedical initiatives: Cell Phone as a Platform for Health Care, and Computational Challenges of Genome Wide Association Studies.

JU: Give us a sense of the kinds of biomedical projects you're working on internally, as well as those you're working on with external partners. I spoke with George Hripcsak, one of the researchers awarded a grant under the Computational Challenges of Genome Wide Association Studies (GWAS) program, and I know there are others involved there and in other programs as well. I'm interested in what Microsoft brings to the table in terms of helping these folks out with their computational and data management challenges, and also what kinds of things Microsoft learns from these engagements.

KT: The different programs inside of External Research run the gamut from the devices and mobility space, for home health care and elder care, all the way to genome wide association studies. So, we fund projects all the way from bench to bedside. Because we're a software company, we'll focus on the IT parts, and there's a reason for that. These are often the parts that don't get funded elsewhere, or only get funded sparsely. Our purpose for going into medical funding was to fill those gaps.

JU: And why do you think those gaps exist?

KT: I think it's a misperception, by a lot of the funding agencies, that either something doesn't fall into their area, or that it's not as important as the actual research being done. The problem is -- and this is why we're funding this area -- you cannot do medical research without computing. You just can't.

JU: Of course not.

KT: Areas that we've funded...well, the biggest RFP we ran this year was Cell Phone as a Platform for Healthcare, and that was 1.4 million dollars toward trying to reach rural and underserved communities with retro technologies like cellphones and televisions, because those are ubiquitous.

JU: Oh, absolutely. I've spoken to Joel Selanikio, who was recently awarded a MacArthur Grant to use handheld devices for field data collection in the third world. It's a huge opportunity, though as you say it's the sort of retro technology that doesn't make people's eyes light up in Silicon Valley, they just don't see the opportunity the same way.

KT: It's true that they don't. But interestingly we've got a lot of researchers in-house, whether we're talking about that situation or about genomics, who have a keen interest in working in these areas. So for example, we gave Fone+ devices to a couple of the people who were winners of that award. The Fone+, which was developed by Microsoft Research Asia, is a phone that sits in a cradle, it's got RGB out to a television set, and USB input ports for mouse, keyboard, etc. So basically it enables your phone to work like a PC. Now the beauty of this is, if you hook that up to a microscopy device that can do instant visualization of blood cells, determine whether or not somebody has malaria, and display that on a television screen, you've now just set up a lab for doing microscopy anywhere in the world there's a TV and a cellphone. Another example is something we did with Washington University in St. Louis. They're developing low-cost ultrasound probes. Same thing.
They're USB out, and designed to work with laptops, but now with the Fone+ you can plug it into this little cradle and now you've got an ultrasound anywhere in the world where there's power, a TV set, and a cellphone. You can even control the ultrasound device from the phone itself, it's just an amazing technology. So that's an example where Microsoft Research has developed a technology that facilitates providing health care to rural communities. Although it wasn't initially designed for that, it was[...]


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/tolle/tolle.wma




Roger Barga on Trident, a workbench for scientific workflow

Thu, 28 Aug 2008 17:41:00 GMT

Roger Barga, a principal architect with Microsoft's Technical Computing Initiative, is leading the development of Trident, a "workflow workbench" for science. In its first incarnation, the tool will enable oceanographers to automate the management and analysis of vast quantities of data produced by the Neptune sensor array. But as Roger explains in this interview, it's not just about oceanography. Every science is becoming data-intensive. Trident's graphical workflow authoring, reusable data transforms, and support for provenance -- the ability to reliably track and reproduce all the analytic steps leading to a scientific result -- are being used by astronomers too, and are expected to find their way into many other disciplines as well.

JU: We're here to talk about Trident, the scientific workflow workbench for oceanography. Give us the 50,000-foot overview, then we'll zoom in.

RB: Scientists are increasingly dealing with large volumes of data coming from disparate sources. The process used to be manageable. You'd get post-docs to convert the raw data from the instruments into readable formats, and there was a manual workflow to process the data into useful data products.

JU: Those were the good old days. Or maybe not so good.

RB: Right. Because the time to get from raw data to those useful products was often measured in weeks or months. But now our ability to capture data has outpaced our ability to process and visualize it. And it's rising exponentially with the rapid deployment of cheap sensors. The oceanographic project we're working on, Neptune, is just one example of this. Astronomy, and all other sciences, are experiencing the same trend.

JU: Neptune is a University of Washington oceanographic project ...

RB: ... it's actually an NSF project. The proper name is the Ocean Observatories Initiative, and it's being funded for several hundred million dollars. The University of Washington is one of the partners. The Monterey Bay Aquarium Research Institute and a number of coastal observatories are involved as well.

JU: So fiberoptic cables are being laid, and lots of oceanographic data will be pouring in.

RB: Exactly. It's transformed oceanography from a data-poor discipline to a data-rich one. They're going to be able to monitor the oceans 24x7 over long periods of time. So the kinds of processes they can study were never within reach before. They could collect data when there was an episodic event, or when they could get funding. Now they'll be collecting permanently.

JU: What's the scope of the sensor network?

RB: They're laying the trench in Monterey to test and deploy the sensors. NSF is reviewing the larger program, and getting ready to fund the Neptune array, which will be off the coast of Washington and Oregon. The Canadian version of the Neptune array is up and running and collecting data, but the software infrastructure is still being built as we speak.

JU: What quantities of data is the Canadian array producing?

RB: Gigabytes per day. It can easily handle a couple of high-def video streams coming from the ocean floor.

JU: Really?

RB: Yes. And also in-situ devices that can sequence organisms. It really is like not only taking Internet and power out to the ocean, but also a USB bus that instruments can be plugged into.

JU: What are some of the experiments that become possible with this setup?
RB: For example, being able to understand sediment flows across the ocean floor, how temperature and salinity change, how fresh water flows in from rivers, what kind of life exists at those margins. And understanding that interesting narrow band where life thrives in the ocean. Too high up and the tides affect it, too low and there's not enough light. But really, there are a myriad of things like that. JU: So an experiment, in this data-intensive new world, involves formulating a hypothesis, looking for patterns in previously-collected data, and then seeing whether dat[...]
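Trident itself is a Windows tool; the Python sketch below only illustrates the provenance idea Roger describes -- recording every step, its inputs, and its outputs so an analysis can be audited and re-run. The step functions and data are invented for illustration.

# Toy workflow runner that records provenance for each step.
# Not Trident -- just the record-every-step idea, with made-up transforms.
import hashlib, json
from datetime import datetime

def fingerprint(obj):
    """Stable hash of a step's input or output, for later comparison."""
    return hashlib.sha1(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:10]

def run_workflow(data, steps):
    provenance = []
    for step in steps:
        before = fingerprint(data)
        data = step(data)
        provenance.append({
            "step": step.__name__,
            "input": before,
            "output": fingerprint(data),
            "at": datetime.utcnow().isoformat(),
        })
    return data, provenance

# Hypothetical transforms over a list of sensor readings.
def drop_missing(readings):
    return [r for r in readings if r is not None]

def daily_mean(readings):
    return sum(readings) / len(readings)

result, log = run_workflow([3.1, None, 2.9, 3.3], [drop_missing, daily_mean])
print(result)
for entry in log:
    print(entry)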


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/barga/barga.wma




Lewis Shepherd discusses the Institute for Advanced Technology in Governments

Thu, 14 Aug 2008 17:01:00 GMT

Before joining Microsoft's Institute for Advanced Technology in Governments, Lewis Shepherd spent four years at the Defense Intelligence Agency where he helped usher in a new era of collaboration. In this interview, he discusses how the Institute's small team of seven is exploring the nooks and crannies of Microsoft's research efforts and technology portfolios, looking for ways to help governments meet the diverse set of enterprise challenges they face.

JU: Microsoft's Institute for Advanced Technology in Government is a mysterious new organization that hasn't been heard from much. Readers of magazines like Government Computer News may have seen some notices about it, and may have noted that former CIA Assistant Director Jim Simon is the founder, and that it's attracted some other folks who formerly worked in government roles -- Aris Pappas from CIA, you from the Defense Intelligence Agency. But not much else is known. So, what's this all about?

LS: Well, I'd say a better word than mysterious would be quiet. And that's because we're new and small. The Institute was set up by Bill Gates and Craig Mundie in 2004. They decided that Microsoft should play a more strategic role in the eyes of government. Actually, in our title, there's a final letter, S. It's the Institute for Advanced Technology in Governments, plural. We're not strictly focusing on the U.S. federal government, as the backgrounds of the people involved might imply. It's actually governments at all levels. We've worked a bit with state and local governments recently. In the past year we've increased our headcount to seven, and the seventh was an interesting addition. Bob Hayes is a British citizen, he lives and works in Cambridge, UK, and he has experience at all levels of UK government. He began as a beat cop -- a bobby -- and has worked in and around the national security community in the UK for his entire career.

JU: You folks have close ties to Microsoft Research, but don't consider yourselves to be formally a research unit. Or do you?

LS: Not formally, but we do work closely with MSR, along with product groups. Jim Simon reports directly to Craig Mundie, so we have visibility into the entirety of strategic and future-oriented work that Microsoft is doing. Not just strictly MSR, but also incubation, Live Labs, Office Labs, forward-thinking people in various product groups. A lot of it is personal. We're just seven people, and I joined just seven months ago. It's been a wonderful way to see inside this tiny little 90,000-employee company.

JU: Governments are specialized kinds of large enterprises, so there are all sorts of potential applications for Microsoft's enterprise-oriented technologies.

LS: That's a big part of it. The core mission is to assist our federal government, state governments, and eventually we hope local governments and NGOs, to focus on their enterprise-wide problems. Of which there are many. As bureaucratic organizations they're a lot like commercial organizations, but they have particular unique challenges that you probably don't really understand unless you've had the pleasure and frustration of working inside a large federal government organization. If you have, as we have, you really understand the pain, particularly within our national security community. The intelligence community, the Department of Defense, these are massive bureaucratic constellations of organizations. Five out of the seven in our group have some background in that national security community.
I didn't have a career in it. But coming from a different kind of public sector background, and then a Silicon Valley background, I spent four years at the Defense Intelligence Agency where, post-9/11, I tried to bring some new thinking to the intelligence community. Along with a lot of other people, we were able to do some of that. Along the way, as I looked out at the diff[...]


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/shepherd/shepherd.wma




Maurice Franklin reflects on the 2008 Space Elevator Conference

Fri, 08 Aug 2008 14:24:00 GMT

Maurice Franklin is a 12-year Microsoft veteran whose career has focused on performance engineering and server scalability. He's also passionate about the concept of a space elevator, and recently organized and hosted a conference on that topic at the Microsoft Conference Center. In this interview he discusses reasons to build a space elevator, and describes how the concept, first proposed by Arthur C. Clarke, is evolving toward a practical implementation. The transcript for this interview appears below. Audio is available at ITConversations. In a related interview Ted Semon, author of the Space Elevator Blog, reflects on the conference and on the goals and status of the effort.

JU: Maurice, most folks don't even know that there's a serious plan to build a space elevator, and I'm sure even fewer know that those most closely involved gathered this past week for a conference hosted at Microsoft. How did that happen?

MF: Well, I'd qualify the word "plan" ...

JU: Maybe we should call it an intention.

MF: That's a good word. Or aspiration. So, I got involved in the space elevator world in 2002 when I discovered, quite by accident, that there was a conference in Seattle. It was hosted by an entrepreneur, Michael Lane, and a scientist, Bradley Edwards. Dr. Edwards is the father of the 21st-century concept of the space elevator. He'd heard it was impossible, and didn't believe that, so he got a NASA grant, and came up with something that made everybody say: "Well, that's not how we were thinking about it at all. That's a decades plan instead of a centuries plan."

JU: What was different?

MF: I'd read a 1997 NASA study; it was big science and big engineering. It relied upon a spacefaring technology to get to a space elevator. Those of us who are fans of the space elevator see it as a bootstrapping mechanism to get to that spacefaring technology. So, for example, one of the prerequisites assumed in the 1997 study was the ability to move an asteroid into earth orbit. And then to put a manned carbon-nanotube-manufacturing station in orbit. Well, you can't do any of that unless you have a space elevator. As much as anybody has, Dr. Edwards cracked the chicken-and-egg problem. His proposal requires on the order of four or five heavy launches. After that, it self-bootstraps -- if the materials come along, and a lot of other ifs, but that's a game-changing proposal. The NASA study also required a massive elevator, because there were going to be maglev trains running at high speeds, and that just adds more and more weight. His plan is much more modest: it only goes 200 kilometers per hour, which is a bit slow, but it's practical, and you could reasonably get stuff up pretty quickly. And he added in remote power beaming, so you don't have to carry fuel for those days of climbing up towards geosynchronous orbit. In all respects, it's much more practical. If we could get past the technical hurdles, it's something in our lifetime, or at least those of us who are older hope so.

JU: So your role at Microsoft was then, in 2002, and is now...

MF: I'm a performance engineer. No connection with space-related activities at all, just a personal interest. Although that 2002 conference was invitation-only, I showed up. I'd read the NASA study, I'd read Brad's study, I had intelligent questions, and I was accepted, which was very cool.

JU: So how did Microsoft come to be the sponsor and host of the 2008 conference?

MF: One of the collaborators on this project is Dr. Bryan Laubscher.
He's an astrophysicist, most recently with Los Alamos National Laboratory. About a year ago he was invited to the visiting speaker program at Microsoft Research. I went to hear him, gave him my card, and invited him to contact me if there was any way I could help. About a month later he got in touch and said, &quo[...]



Ted Semon reflects on the 2008 Space Elevator Conference

Fri, 08 Aug 2008 14:21:00 GMT

Ted Semon, a retired software engineer, chronicles the efforts to develop a space elevator on the Space Elevator Blog, and volunteers for The Spaceward Foundation, which administers competitions to develop several of the core technologies that will be needed to build the elevator. Ted attended and spoke at the 2008 Space Elevator Conference held at the Microsoft Conference Center in Redmond. In this interview he discusses the concept of the space elevator, and the status of current efforts to bring it to life. In a related interview, Maurice Franklin, the Microsoft employee who brought the conference to Redmond this year, reflects on the conference and on the goals and status of the project.

JU: How did you become interested in the space elevator?

TS: I've always been a science fiction fan, and I read Arthur C. Clarke's Fountains of Paradise many years ago. The idea of the space elevator seemed so obviously the right way to get up out of Earth's gravity well. When I retired from the software world a few years ago, I decided to learn what was happening with the concept. There were blogs and websites, but nothing coherent, so I decided to pull the information together myself on the space elevator blog.

JU: At this point were we into the modern era of the development of the concept?

TS: Yes, this was in 2006. I'd read the Brad Edwards book, but it was hard to find out what was currently going on, so I started the blog.

JU: The concept as described by Clarke is quite different from the modern one that's emerging, right?

TS: In some ways yes, in some ways no.

JU: Can you spell out the differences?

TS: OK, there are several. He had located his port on the island of Sri Lanka. Current thinking is that it won't be a land port, it'll be an ocean-going port, so you can move the space elevator if you need to, and get it out of the way of satellites and other things in orbit.

JU: I gather that's a "when", not an "if".

TS: Exactly. There's stuff up there, it's going to intersect the elevator, you've got to deal with that.

JU: And the object being moved, just to be clear, is a 100,000 kilometer strand of material.

TS: Right. It's a carbon nanotube tether, or rope, or ribbon, whatever you want to call it. One end is anchored to an earth port, something like an ocean-going oil platform, and the counterweight is at 100,000 kilometers up. By moving the ocean-going platform you can induce a wave that travels up the ribbon. You know which objects in space to worry about, at least the big ones, because you track them. And you know what's going on with the ribbon because you have sensors embedded in it, and climbers going up and down that signal their location. So you should be able to always move the ribbon out of the way of a collision.

JU: So one difference from Clarke's original vision is that the platform is mobile and sea-based. What are some other differences?

TS: Cost. He had imagined the cost would be something like the Earth's combined gross national products for a year, or some enormous number like that. The number now that looks more realistic is on the order of 10 billion dollars.

JU: And what accounts for that lower estimate?

TS: More knowledge now about how it's going to be built.

JU: Maurice Franklin and I discussed this, and his take was that the Clarke scenario assumed a huge mass parked in geosynchronous orbit, and that mass would be very expensive to lift.
That ties into another evolution of the concept, which is that it's not now anchored with a large mass at 22,000 miles, but extends far beyond that. TS: Right. Something like 100,000 kilometers. The counterweight in the Edwards plan is about 600 metric tons, quite a bit smaller than the Clarke scenario. JU: The reason that's possible is? TS: Because it's farther out in orbit. W[...]
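A rough way to see why a counterweight far beyond geosynchronous altitude can be much smaller is a back-of-the-envelope argument (illustrative only, not a figure from the interview). For a mass rotating with the Earth at radius r, the net outward acceleration is

a_{\text{net}}(r) = \omega^2 r - \frac{\mu}{r^2}

where \omega is Earth's rotation rate and \mu = G M_\oplus. Below geosynchronous radius this is negative (gravity wins); beyond it, it grows roughly linearly with r. So a given outward pull on the tether can come either from a very large mass parked just above GEO or from a much smaller mass much farther out, which is how a roughly 600-ton counterweight at the end of a 100,000 km ribbon can do the job.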


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/semon/semon.wma




How Microsoft's External Research Division works with a new breed of e-scientists

Thu, 31 Jul 2008 16:49:00 GMT

Tony Hey, VP for the External Research Division within Microsoft Research, leads the company's efforts to build external partnerships in key areas of scientific research, education, and computing. He's been a physicist, a computer scientist, and a dean of engineering, and for five years ran the UK's e-Science program. These experiences have given him a broad view of the ways in which all the sciences are becoming both computational and data-intensive. Microsoft tools and services, he says, will support and sustain the new breed of scientists riding this new wave. Audio: WMA, MP3

JU: For this series of interviews I've spoken to a number of Microsoft folks who are working with external academic partners on projects that fall under your purview. The list includes Pablo Fernicola's Word add-in for scientific publishing, Catharine van Ingen's collaboration with Dennis Baldocchi at Berkeley on the analysis of CO2 data, and Kyril Faenov's HPC++ project to bring cluster computing to the classroom. These are all pieces of your puzzle, right?

TH: Absolutely.

JU: By way of background, you've been a physicist, then a computer scientist, and then for a time led the UK's e-science program.

TH: Which would be called cyberinfrastructure in the US, yes. I'm on the NSF's advisory committee for cyberinfrastructure, it's a very similar goal.

JU: And then you surprised a lot of people by joining Microsoft. Take us through your initial role leading the TCI [technical computing initiative] and on to your current expanded role leading MSR's external research efforts.

TH: Right. So having been a physicist, and then a computer scientist working on parallel computing for years, and then chair of my computer science department and then dean of engineering, I think I understand the community we're trying to work with pretty well. Also, as you mentioned, I worked for 5 years running the UK e-science program. That was about huge amounts of distributed data, and collaborative multi-disciplinary research in a variety of fields. The environment, bioinformatics, almost every field of science now has some element of distributed and networked collaboration. The science agenda was for the tools and technologies to make that collaboration trivial, just as with Web 2.0 your grandmother can do a mashup. I don't think the UK e-science program achieved that, but I do believe that Microsoft can help make tools and technologies available that will help scientists and researchers do their work.

JU: In your parallel computing phase, you helped write the MPI [message passing interface] specification, correct?

TH: Yes. I've been in this for 30 years, on and off. I have very good friends in the high-performance and parallel computing communities here in the US, and I was involved in European projects. There was a danger that the Europeans would go one way, and the US another, so it was time to see if we could get the community to put together a community standard. It isn't an ISO standard, there wasn't a big standards body, it was a group of experts who got together with the academics and with the industry players. Rather a small set, and we used to meet every 6 weeks in Dallas airport, so you really had to be dedicated to go there.

JU: [laughs]

TH: But what came out of it was a standard which has stood the test of time. I co-authored and initiated the first draft.
It's been much changed since then, and I don't take credit for the final thing, but I did try, with Jack Dongarra, to initiate the standards process, and I think I remember buying the beer at the first session. JU: What's interesting to me is that despite that, you've been a vocal skeptic regarding raw grid capability. And you've been very careful to stress that in your view, the real challenges have to do with data -- the ability to comb[...]


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/hey/hey.wma




How the WorldWide Telescope works

Mon, 14 Jul 2008 09:25:00 GMT

Jonathan Fay is principal developer of the WorldWide Telescope. In this interview he explains how the project has yielded not only a breakthrough software product, but also a reference model for the acquisition, transformation, and visualization of astronomical data. You'll learn not only how the WorldWide Telescope works, but also why it exists: to fulfill the education mission discussed in a related interview with Curtis Wong and Roy Gould.

JF: As long as I've been doing computers, going back to the early 1980s on the TRS-80, graphics and the visualization of data, the earth, and space were interests of mine. I'd gotten a department-store telescope one year for Christmas, and loved looking at stuff through the light-polluted LA skies.

JU: So you were in the same boat as Curtis Wong?

JF: Yeah, you could really only see planets and the moon in any detail. But I was passionate about computers and astronomy. Every time computers got more powerful, I'd look into visualizing the Mandelbrot set and the stars as a litmus test. In 2001 I was development manager for HomeAdvisor, and we were assimilating a research project called TerraServer. Tom Barclay, a researcher who was working with Jim Gray, said, "Hey, USGS has this DEM -- digital elevation model -- data that they'd like me to load into TerraServer. I wonder if you have ideas about what we could do with it." I'd been very much into 3D visualization. I have this program called LightWave, which goes back a long time but is now used for things like Serenity and Battlestar Galactica, so I started taking TerraServer images and USGS data and creating hills with texture-mapped images. Then Tom Barclay told me how NASA was using satellite weather data, watching over many days, and getting rid of the clouds so you could see the surface of the earth. They called it the Blue Marble project. I found and downloaded that data, and also some global digital elevation data, and started creating a hierarchical 3D view of the earth so you could zoom in and browse. Then I worked to bring that into TerraServer, because we had resolution down to a couple of meters. But this was just a side project, and there wasn't interest in developing it, so I decided to look into visualizing other astronomy data.

JU: This was around the time in 2002 when Jim Gray and Alex Szalay published their paper entitled the World-Wide Telescope?

JF: Right. Jim talked about TerraServer "pointing up" as the next thing. He was already getting himself embedded with astronomers. I didn't see much of that. Tom was babysitting TerraServer while Jim went off into the astronomy end of things, and I was still doing geo, so we weren't collaborating. After having made some demos, a lot of people thought it was cool, but that was all. So I kept that on the back burner, and moved into some other groups. At the same time I was building my observatory. In Seattle, you take pictures when you can. If you can't push a button and have your observatory open up and take images when you get clear skies, by the time you set up you'll be clouded in. I wanted to automate the whole process, including image processing. That introduced me to the whole pipeline of data collection, processing, and subsequent research. Although I'm an amateur, I had to drill into the world of data and image processing that professional astronomers had to deal with. I was using the same resources.

JU: I'd like to hear more about that.
A lot of us are aware that those data and image resources exist, but it's really unclear how to make use of them. JF: You know, there is a lot available, but most amateur astronomers had no idea it existed, it was very hard to get to, and even the scientists had a hard time getting access to it. Essentially it was locked up in sil[...]
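The hierarchical, zoomable views Fay describes (TerraServer, Blue Marble, and later the sky) all rest on the same trick: cutting imagery into a pyramid of tiles, with each zoom level doubling the resolution. The WorldWide Telescope uses its own sky-specific tiling; the sketch below is a generic web-map-style pyramid with made-up coordinates, just to show how a position maps to a tile address.

# Generic tile-pyramid addressing: which tile covers a given lon/lat
# at a given zoom level? (Equirectangular layout for simplicity;
# not the scheme WorldWide Telescope actually uses.)
def tile_for(lon_deg, lat_deg, level):
    n_cols = 2 ** (level + 1)   # 2 tiles wide at level 0
    n_rows = 2 ** level         # 1 tile high at level 0
    x = int((lon_deg + 180.0) / 360.0 * n_cols)
    y = int((90.0 - lat_deg) / 180.0 * n_rows)
    # Clamp to the grid edge for lon = 180 or lat = -90.
    return level, min(x, n_cols - 1), min(y, n_rows - 1)

# Seattle-ish coordinates at three zoom levels: the tile address
# becomes more specific as the level increases.
for level in (1, 4, 8):
    print(tile_for(-122.3, 47.6, level))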


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/fay/fay.wma




The story of the WorldWide Telescope

Fri, 20 Jun 2008 12:18:00 GMT

The WorldWide Telescope was first shown to the public at TED 2008, in a joint presentation by project leader Curtis Wong, manager of Next Media Research for Microsoft, and Roy Gould, a science educator with the Harvard-Smithsonian Center for Astrophysics. In this interview they discuss how -- and why -- the WorldWide Telescope combines many sources of astronomical data and imagery to create a seamless view of the night sky.

CW: I was interested in astronomy as a kid, but when you grow up in Los Angeles, the odds of seeing the Milky Way are pretty slim. I think the only time it happened recently was during the quake when the whole city lost power. It wasn't until much later that I actually got to see the Milky Way, and other objects I'd seen pictures of, and it was really quite a transformative experience. I always wanted to figure out how to recreate and share that experience. Early on, in the 80s, I made a little HyperCard stack called MacTelescope, which was my attempt to create the experience of looking at the sky, and then -- if you could manage to see the Milky Way -- to zoom into a section where there are all these interesting globular clusters and nebulae, if you know where to look.

JU: So there was already the idea of taking people on a guided tour.

CW: Exactly. Later I moved from the Voyager company to a company called Continuum, a little think-tank organization started by Bill Gates. The company was thinking about authoring tools and media. The project I wanted to do was called John Dobson's Universe. John Dobson was a physical chemist at UC Berkeley who got drafted to work on the Manhattan Project. He was there to do the chemistry for tuballoy, which was the code word for uranium. He became a conscientious objector, left the project -- which was pretty hard to do -- and became a Vedantan monk. Then he became interested in looking at the sky but, being a monk, he didn't have any money for a telescope. He knew that San Francisco shipyards were sources of glass discs, but they were too thin. Conventional wisdom said that you need thick glass to be able to grind a mirror. But he defied wisdom and found a way to use round porthole glass. He also came up with an ingenious way of mounting the telescope, using cardboard concrete form tubes. His design is now one of the most common designs for telescope mounts in the world. He also created an organization called San Francisco Sidewalk Astronomers, where people who have telescopes are encouraged to take them out into the public and show people what's up in the sky, and explain what's going on. John spent a lifetime in national parks, and in San Francisco, talking about astronomy to the general public. He was really good at taking complex ideas and conveying them to the public. I remember once when he was showing the Andromeda galaxy, there was a picture with the galaxy in the background which looks like a kind of fog, with a lot of stars in front, and he said: "The stars you're looking at in front are like raindrops on your window, looking at a distant cloud." So anyway, we started that project, got about halfway, then other things happened and it got cancelled.

JU: It's a great story!

CW: So I've thought about astronomy for a long time. At Microsoft, I heard a talk by Jim Gray who was applying his database expertise to astronomy.
JU: I looked up a paper that Jim published with a guy from Johns Hopkins, Alex Szalay, and it's called The World-Wide Telescope, but interestingly, the title also includes the phrase: An Archetype for Online Science. The idea is that, not just in astronomy but in all of science, we're getting to the point where there's less direct observation, and more collection and [...]


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/wong-gould/wong-gould.wma




Making sense of electronic health records

Thu, 12 Jun 2008 09:22:00 GMT

My guest for this week's Perspectives show is George Hripcsak, professor of biomedical informatics at Columbia and one of six researchers recently funded by Microsoft Research through its Computational Challenges of Genome Wide Association Studies (GWAS) program.

JU: For starters, what is a genome wide association study?

GH: A genome wide association study involves scanning markers across the human genome to find genetic variations associated with certain diseases.

JU: Specifically what's being looked for is single-nucleotide markers, right?

GH: Yes. Now our role in this project is the phenotype. We're trying to address the phenotypic computational challenge. Often it's simple. Someone has diabetes or doesn't. Or two people have it, but one gets complications and the other doesn't.

JU: So by phenotype you mean the expression of diabetes, in this case?

GH: Yes. Often you start with a disease, and some number of patients, often very small, but up to several thousand, plus a control group of patients without the disease. You study the entire genotype, and you look for which sites on the genome are associated with that disease. Then you look into that site. Now the fact that you may find a certain genetic mutation at that site -- that's not necessarily the cause of the difference between the two sets of patients. The cause may be something near that marker on the genome. So you might sequence that area, looking for other information about what genes are in the area, and so on.

JU: So the computational challenge is one of correlation.

GH: The first step is correlation. But...I'm at the zeroth step. There are other people working on the part I'm describing here. First they'll come up with associations, which is a computational challenge in its own way, because there is a vast number -- a hundred thousand, someday a million -- markers that you're looking at, to see if they're associated with this trait, diabetes or not diabetes. Then they need to figure out what proteins are coded at the marker, or near the marker. In order to get to that point you need the phenotype too. As long as it's something simple, like patients with or without diabetes, it may seem like that's the easy part of the experiment. But as time goes on, and genotyping gets easier and cheaper -- and as we learn to handle patient privacy, that's the other thing that limits the study, we have to be careful about how we collect and store these data -- the hard part is going to be collecting the phenotype.

JU: When you say "collecting the phenotype" -- that's clinical observation and description?

GH: Exactly. Imagine that every patient who comes into the hospital is given the option to participate in a trial where, in a secure fashion, their genotype is done, and then their information can be used to discover new things about disease. Some number of patients would agree to that, and then all you have to do is take their blood samples, check the DNA, and genotype it. It's relatively straightforward if you have the money to do it. Then you have to find out what the phenotype of the patient is. But what questions should you ask? We don't know what diseases we might be studying, or what we might discover. We want to know the whole medical course of this patient: when they've been well, and when they've been sick. What we have for these patients are their electronic health records. And in the future, with Microsoft HealthVault for example, we have the personal health records.
And so the question is, with the patient's permission, can we use those data to come up with a reliable phenotype? JU: OK. Now I see how this ties into your career history. You've done a lot of work in the area of[...]
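The "first step is correlation" part can be made concrete with a toy association test: for each marker, compare genotype counts in cases versus controls. The marker IDs and counts below are fabricated and far too small to mean anything; real GWAS pipelines test hundreds of thousands of markers and must correct for multiple comparisons.

# Toy marker-vs-phenotype association test (made-up counts).
# Each marker gets a 2x2 table: [carriers, non-carriers] x [cases, controls].
from scipy.stats import chi2_contingency

markers = {
    "rs0001": [[30, 10],   # carriers:     30 cases, 10 controls
               [20, 40]],  # non-carriers: 20 cases, 40 controls
    "rs0002": [[25, 26],
               [25, 24]],
}

for name, table in markers.items():
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2={chi2:.2f}, p={p:.4f}")
    # In a real study the p-values would need multiple-testing correction
    # (e.g., Bonferroni across 10^5 to 10^6 markers).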


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/hripcsak/hripcsak.wma




How Mercy Corps syncs databases in Afghanistan

Thu, 29 May 2008 13:34:00 GMT

My guests for this week's Perspectives show are Barbara Willett and Nigel Snoad. Barbara works for Mercy Corps in Afghanistan, as the design, monitoring, and evaluation manager for a number of agricultural development programs. Nigel Snoad is a lead capabilities researcher for Microsoft Humanitarian Systems. Together they've pioneered the use of FeedSync as a way to synchronize data collection and reporting in an environment where Internet connectivity is spotty, and where lightweight, two-way synchronization is essential.

JU: Barbara, we want to discuss the database synchronization system that you've partnered with Nigel to develop, as part of the Mercy Corps work in Afghanistan.

BW: I'm the design, monitoring and evaluation manager here in Afghanistan. I arrived last year in March, about the same time we had a consultant in doing a general review. He also brought another consultant who'd worked with Mercy Corps on technical issues, including the development of databases.

JU: And can you explain what, in this context, is being designed, monitored, and evaluated? What are the programs you're supporting, and what do those programs do?

BW: In Afghanistan, almost all our programs revolve around agricultural development. We have a number of programs funded by USDA, the British Government, the European Commission, all with the same goal of improving the livelihoods of the Afghan people.

JU: Is your microfinance program among those?

BW: Yes, we have a microfinance program, but it's one of our older ones, and it's self-sustaining now, no longer administered directly by our monitoring and evaluation system.

JU: What are some examples of programs that are?

BW: The ABAD [Agro-Business and Agriculture Development] program is where all this started. It supports business development, and technical capacity training for farmers. There's also a lot of work in animal health, redeveloping and reestablishing veterinary field units, and training veterinarians, para-veterinarians, and female livestock workers.

JU: So your management challenge, relative to these programs, is what?

BW: It's developing tools that are applicable to multiple programs doing the same kinds of things. Everybody's involved in agricultural development, and interested in measuring improvement in sales and production. It's a challenge to collect that data and share it -- both operational data and impact data.

JU: So the field workers are in various locations around the country, with intermittent Internet access?

BW: Right. And in these circumstances, we want to standardize how we collect, synchronize, share, and report on this information.

JU: You had a pre-existing system based on Microsoft Access, as I understand it, and there were problems synchronizing those databases to your central office.

BW: Initially we didn't have Access, actually. When I arrived there wasn't any centralized system at all. Everything was Excel-based, sharing spreadsheets month-to-month from this region to that region. So we started the Access system, then later we realized it wasn't really working out because of the Internet problems, and because the process was bulky and cumbersome.

JU: Nigel, that's where you come in, right?

NS: Yes. Our humanitarian team was in Afghanistan looking to talk with people, do a bit of show and tell, and mainly get feedback about what people would like, and what they really need. And then use that to iterate what we were doing, and look for partnerships to do pilots.
With Mercy Corps we said, here's what we've got, here's what we're thinking, does that make sense to you? Mercy Corps was great for that, because they were fairly well advanced in their thinking about how they we[...]
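FeedSync works by attaching sync metadata (an item id, an update count, and update history) to each feed item, so two copies of a dataset can be merged in either direction without a central server. The Python sketch below is a drastic simplification -- a last-writer-wins merge keyed on an update counter, with invented records; the real specification also carries per-update history and preserves conflicts rather than silently discarding them.

# Simplified FeedSync-style two-way merge (illustrative only).
# Each record: id -> {"updates": int, "data": ...}
# Real FeedSync items carry full sx:sync history and conflict records.

def merge(local, remote):
    merged = dict(local)
    for item_id, remote_item in remote.items():
        local_item = merged.get(item_id)
        # Take the copy that has been updated more times.
        if local_item is None or remote_item["updates"] > local_item["updates"]:
            merged[item_id] = remote_item
    return merged

field_office = {
    "farm-17": {"updates": 3, "data": {"wheat_kg": 410}},
    "farm-22": {"updates": 1, "data": {"wheat_kg": 120}},
}
kabul_office = {
    "farm-17": {"updates": 2, "data": {"wheat_kg": 380}},
    "farm-30": {"updates": 1, "data": {"wheat_kg": 95}},
}

# Sync can run in either direction; both sides converge on the same state.
print(merge(kabul_office, field_office))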


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/mercy/mercy.wma




Digital formats for long-term preservation

Thu, 22 May 2008 12:42:00 GMT

Caroline Arms is an information technologist who came to the Library of Congress to work on the American Memory project. The challenge of preserving digital content captured her interest, and her work since has focused on understanding and promoting formats that raise the probability that content will be usefully available to future generations. She is the co-compiler, with Carl Fleischhauer, of the Digital Formats website, and a member of the committee to standardize Office Open XML.

JU: I'm interested in your perspective on XML's role in the preservation of documents for the long term.

CA: I'd like to be able to go broader than XML. It's one aspect, but it's not the only answer. When we're talking about the challenge of preserving digital content we usually think more broadly.

JU: Great point. Of course there's a whole range of issues, from how you keep the disks spinning to...well, let's step back and talk about acid-free paper, which may be a more durable format than anything we've done electronically.

CA: Absolutely.

JU: So, OK, give us the broad view of how you have approached this problem at the Library of Congress.

CA: The Library's mission is to make its resources useful and available to Congress and to the American people, and to sustain and preserve a universal collection of knowledge and creativity for future generations. Congress funded the National Digital Information Infrastructure and Preservation Program (NDIIPP), and I've been working as part of that since the early 2000s. The program looks for every opportunity to raise the probability that content created today will be usable by those future generations. I first came to the library to work on American Memory, which was digitizing out-of-copyright materials and making them available to everybody.

JU: Of course that project isn't just a resource for future generations...

CA: Right. So, there are many ways to think about raising that probability. The program is trying to build a network of organizations committed to the stewardship of digital content. Not just traditional libraries and archives, but certainly including them. You mentioned spinning disks. We try to have conversations with storage vendors, and try to explain how we see the requirements for long-term cultural archives as being a little different from those for business continuity. You also mentioned acid-free paper. In the book age, we can take in a book, make sure it's on acid-free paper, and it will still be there a hundred years from now. The phrase "benign neglect" gets used. Paper survives benign neglect. Digital content doesn't.

JU: It's a paradox. Recently I visited my parents, and we found a box of correspondence they had written from a yearlong trip to India many years ago. I realized that my own correspondence is probably a lot less likely to be available to my kids or grandkids.

CA: I have exactly the same experience. My father was away at the battlefront in World War II, and he wrote as frequently as he could. My mother still has all those letters. Today's forces are using email and cellphones and other ephemeral means of keeping in touch. It's amazing to read the letters discussing what my name would be, because I was on the way.

JU: Of course once that box of letters is lost, it's lost. There is no backup, there are no perfect copies.
It's a paradox that we're in an era when you can make perfect copies, and distribute them as widely as you want, so you'd think that superabundance would save the day, but that's not necessarily true.

CA: No. You have to act at the time of creation in order to up the probability. This is true for your o[...]


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/loc/caroline.wma




Where is WinFS now?

Thu, 15 May 2008 13:20:00 GMT

WinFS was an ambitious effort to embed an integrated storage engine into the Windows operating system, and use it to create a shared data ecosystem. Although WinFS never shipped as a part of Windows, many of the underlying technologies have shipped, or will ship, in SQL Server and in other products. In this interview Quentin Clark traces the lineage of those technologies back to WinFS, and forward to their current incarnations. Quentin Clark led the WinFS project from 2002 to 2006. He's now a general manager in the SQL Server organization. JU: You made a fascinating remark last time we spoke, which was that most of WinFS either already has shipped, or will ship. I think that would surprise a lot of people, and I'd like to hear more about what you meant by that. QC: WinFS was about a lot of things. In part it was about trying to create something for the Windows platform and ecosystem around shared data between applications. Let's set that aside, because that part's not shipping. JU: So you mean schemas that would define contacts, and other kinds of shared entities? QC: Yeah. That's a mechanism, a technology required for that shared data platform. Now the notion of having that shared data platform as part of Windows isn't something we're delivering on this turn of the crank. We may choose to do that sometime in the future, based on the technology we're finishing up here, in SQL, but it's not on the immediate roadmap. JU: OK. QC: Now let's look under the covers, and ask what was required to deliver on that goal. It's about schemas, it's about integrated storage, it's about object/relational, a bunch of things. And that's the layer you can look at and say, OK, the WinFS project, which went from ... well, it depends who you ask, but I think it went from 2002 until we shut it down in 2006 ... what was the technology that was being built for that effort, in order to meet those goals? And what happened to all that stuff? You can catalog that stuff, and look at work that we're doing now for SQL Server 2008, or ADO.NET, or VS 2008 SP1, and trace its lineage back to WinFS. JU: Let's do that. QC: OK. I guess we can start at the top, with schemas. We're not doing anything with schemas. At the end of the WinFS project we had settled on a set of schemas. It was a very typical computer science problem, where the schemas started out as a super-small set of things, and then became the inclusion of all possible angles, properties, and interests of anybody interested in that topic whatsoever. We wound up with a contact schema with 200 or 300 properties. Then by the time we shipped the WinFS beta we were back down to that super-small subset. Here's the 10 things about people that you need to know in common across applications. But all that stuff is gone. The schemas, and a layer that we internally referred to as base, which was about the enforcement of the schemas, all that stuff we've put on the shelf. Because we didn't need it. It was for that particular application of all this other technology. So that's the one piece that didn't go anywhere. Next layer down is the APIs. The WinFS APIs were a precursor to a more generalized set of object/relational APIs, which is now shipping as what we call entity framework in ADO.NET. What's getting delivered as part of VS 2008 SP1 is an expression of that, which allows you to describe your business objects in an abstract way, using a fairly generalized entity/relationship model. In fact we got best paper at SIGMOD last year on the model, it's a very good piece of work. 
So you describe your business entities in that way, with a particular formal language... JU: For people w[...]
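The Entity Framework Clark describes is a .NET technology; as a rough analogy only, here is what describing business entities abstractly and letting an object/relational layer handle storage looks like in Python with SQLAlchemy. The Contact entity and its fields are invented, and this is neither WinFS nor ADO.NET -- just the same general O/R idea expressed with a different, well-known mapper.

# An object/relational mapping in the same spirit as the WinFS-descended
# Entity Framework, sketched with Python's SQLAlchemy. Entity and fields
# are hypothetical.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Contact(Base):
    __tablename__ = "contacts"
    id = Column(Integer, primary_key=True)
    display_name = Column(String, nullable=False)
    email = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)       # the mapper emits the schema

with Session(engine) as session:
    session.add(Contact(display_name="Example Person", email="person@example.com"))
    session.commit()
    # Query against the entity, not the SQL table.
    print(session.query(Contact).filter_by(display_name="Example Person").one().email)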


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/winfs/winfs.wma




OpenSearch federation with Search Server 2008

Thu, 01 May 2008 14:59:00 GMT

With the new OpenSearch-based federation capability in Search Server 2008, you can integrate any external search service that can expose results as an RSS feed. In this podcast Jon Udell discusses search federation with Richard Riley and Keller Smith. Richard Riley is a Senior Technical Product Manager for Microsoft Office SharePoint Server 2007. He is responsible for driving Technical Readiness both within and outside of Microsoft and specializes in the Enterprise Content Management and Search features of the product. Keller Smith is a Program Manager in the Business Search Group at Microsoft. He designs and manages new enterprise search features in the areas of Federation and End-User UI. His passion has always been to improve the lives of users through exciting new ideas in software.

Links: Enterprise Search Blog, Search Gallery, Location Definition File Schema

Q: What's the lineage of this search server?

A: The technology that was built into Index Server, way back in the NT4 option pack, has grown and diversified into various products, including desktop search and SharePoint. They've split apart now, but the common DNA is there.

Q: What differentiates this search server from its predecessor?

A: We found that customers wanted to use the search capability without buying the whole SharePoint product. So we split the search features into Microsoft Office SharePoint Server for Search. People could buy that and use the search features without the full MOSS functionality. Search Server is the next version of that.

Q: What were the domains over which MOSS 2007 could search?

A: Anything you could crawl. Out of the box, SharePoint plus other content sources we had handlers for, including Notes. Or you could go to the effort of writing your own protocol handler, or business data connection. But if you couldn't find a way to index it yourself, there was no way to connect to the data.

Q: So how does federation change the game?

A: Instead of indexing the content, you're leveraging an external search engine that already exists. That engine returns results back in an XML format we can render.

Q: I was fascinated to learn you're using the OpenSearch mechanisms and formats to accomplish this. I did an early implementation for Amazon A9, and it was trivial since I already had an RSS feed coming out of the search engine I wanted to integrate. Is that still how it works?

A: Yes. Any search engine that emits an RSS feed, you can connect to. It takes about 5 minutes to set it up. You take the query URL, put it into a federated location definition (FLD) file, and away you go.

Q: I guess the part of OpenSearch people will be most familiar with is the description that drives the search drop-downs in browsers. It's a little package of XML that defines the template for the query. You must be using that in Search Server as well, when it acts as a client to federated sources.

A: Yes, exactly. SharePoint is behaving as a client, just as IE is. When you create a federated location definition, you're creating one of these OpenSearch description files. But we add some schema changes for the triggers that SharePoint uses to know when to send queries to that location. And we add the XSL used to render the results. So we extend the OpenSearch schema to make it more useful to SharePoint.

Q: When you start shipping queries over the net to multiple federated sources, you start running into issues of sequencing and latency. How do you deal with that?

A: You add federated locations as web parts.
And you can choose whether to load them synchronously or asynchronously. Ev[...]
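The core of OpenSearch federation is just this: substitute the user's terms into a query URL template and parse the RSS that comes back. A minimal Python sketch follows, with a hypothetical endpoint URL standing in for a real federated location.

# Minimal OpenSearch-style federation client: fill in {searchTerms},
# fetch the RSS, and pull out title/link pairs. The endpoint URL is
# hypothetical; any engine that returns search results as RSS would do.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

TEMPLATE = "https://search.example.com/results.rss?q={searchTerms}"

def federated_search(terms):
    url = TEMPLATE.replace("{searchTerms}", urllib.parse.quote(terms))
    with urllib.request.urlopen(url, timeout=10) as resp:
        feed = ET.fromstring(resp.read())
    for item in feed.findall(".//item"):
        yield item.findtext("title"), item.findtext("link")

if __name__ == "__main__":
    for title, link in federated_search("space elevator"):
        print(title, "->", link)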


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/searchserver/searchserver.wma




Ray Ozzie introduces Live Mesh

Wed, 23 Apr 2008 09:02:00 GMT

Introducing Live Mesh In this audio version of a Channel 9 video, Ray Ozzie discusses his role as Microsoft's chief software architect, and the role of Live Mesh as one aspect of an emerging Internet-oriented platform. Ray Ozzie is Microsoft's chief software architect. Links mesh.com Video of this interview on Channel 9 Abolade Gbadegesin on the architecture of Live Mesh Demo of Live Mesh on Channel 10 Mike Zintel on Live Mesh as a platform Background on FeedSync JU: Hello Ray! Thanks for joining us. RO: It is great to be here, Jon. JU: So, it's been about 3 years since you joined Microsoft, initially as CTO. People tend to wonder what it's like coming from a company of 300 to a company on the scale of Microsoft. RO: I've had the luxury of a career working for small companies: Software Arts in the early days, and a couple of startups in Iris and Groove. But Lotus ended up being acquired by IBM, so I was at one big company before coming to Microsoft. It's tremendous in terms of the potential impact that someone can have. I think everyone at Microsoft tends to be here because you want to have a tremendous impact, and certainly that was a tremendous draw. What I really do enjoy about the role as CSA is being at the juncture of business strategy, product and market strategy, and technical strategy. I have the opportunity to work with not only the executive team on larger strategic issues, but also with the product teams at fairly detailed technical architectural levels. As an engineer, it is really fascinating, and I've met a lot of great people. JU: People also wonder what it's like to step into a role formerly occupied by Bill Gates. What kind of continuity will there be, and how might you want to reshape the role? RO: Bill is a very unique individual. There will never be another Bill. He has got a tremendous palette of talents, both technical as he applied at Microsoft, and non-technical in the role he's moving into. In shaping the role after July, when he won't be here full time anymore, he really split the role into two pieces. Craig Mundie takes over long-term issues, research and things like that. And I have taken over most of the technical and product strategy related to products that'll ship within a couple years. My background is different than Bill's was. I've been a lot more hands-on in the product design for a number of years. I'm dealing with broader issues than I've dealt with in the past, but my background in product development gives me a lot of grounding in terms of working with a development team. And I think Craig and I make a good pairing in terms of filling his shoes. JU: How do you balance the need to span a vast spectrum of activities and the need to go deep on things? RO: Time management -- attention management -- is really the biggest challenge. The pace is fairly brutal. At the beginning of the year, I'll kind of plan out how much of my time in hours I want to spend in different categories of things. There's some allocation for the rhythm of the business and high-level strategic things. There are allocations in terms of time I want to spend with product groups. And then there's a fraction I didn't initially realize I had to be as intentional about, but sometimes you have to create white space because, like a task scheduler that has too many ready tasks, you can thrash if you spend all day in a reactive mode, dealing with the incoming issues, the incoming communications.
Sometimes you have to create some white space in order to think and understand what is going on in the environment. I can do that by [...]


Media Files:
http://video.ch9.ms/llnwd/ch9/0/RayOzzieLiveMesh_ch9.wma




Word for scientific publishing

Thu, 17 Apr 2008 15:30:00 GMT

Pablo Fernicola is a group manager at Microsoft. He runs a project focused on delivering tools and services for scientific and technical publishing, with a particular interest in the transition from print to electronic and web-based content, and its implications for collaboration, search, and content discovery in the future. In this interview, Pablo explains how a new add-in for Word, now available as a technical preview, helps authors and publishers of scientific articles work more effectively with one another, and with online archives like PubMed Central. Pablo Fernicola Links Technical Computing @ Microsoft - Scholarly Publishing Download details for the Article Authoring Add-in Pablo Fernicola's blog: ex Scientia JU: Hi Pablo, thanks for joining us to talk about a new Word add-in for authors of scientific journal articles. It's an interesting story about applying the XML capabilities of Office, and also about the evolution of journal publishing. How did this project get started? PF: It's an incubation project. Three people had an idea: Jean Paoli, an XML pioneer, Jim Gray... JU: Oh really? I didn't know he had been involved. PF: Yes, he and Jean really pushed to get this started, and they both recruited me for this project. It's been a little over a year since Jim disappeared, and that was a big blow, considering his key role. And the third key person is Tony Hey. JU: We should explain that Tony runs what's called the technical computing initiative, and is very involved in figuring out how Microsoft can help various people in the scientific community address computing and information management challenges. PF: Right. Scientific authors in many disciplines use Word to write articles. We looked into how to simplify the workflow, streamline the process, and lower the cost. And not just for the authors, but also for the journal publishers. JU: It's been true for a long time in publishing, and not just scientific publishing, that there have been real challenges getting that Word content converted into the kinds of long-term formats we need: XML that's richly decorated with metadata. Publishers have tended to use strategies that involve giving people templates that try to use styles to control what's in the document. But since Word 2003, and especially since Word 2007, there has been a set of XML capabilities which has made possible a much more robust approach. PF: That's right. Before Word 2003, styles were the best you could do. And people got quite far by relying on them. But they were very fragile. When you copied and pasted, styles would bleed across. It was hard to disentangle that when you converted the file. JU: That's part of the problem. And part of it is that, along with the content itself, there's a process involving the metadata, and that process is divided between the author and the journal publisher. It's a shared responsibility, and you need an information management system that embraces that division of labor. PF: Also: What kind of user interface do you present to these different groups? There are really three groups. First the authors, who are subject-matter experts but don't know anything about the publishing process, and shouldn't have to know. Second, the journal editors. They're also subject-matter experts, but they also know about the structure of the journal, and about the metadata they need to apply. And third, you have companies and vendors who do backend tools and services, as well as the folks who work on the electronic archives.
With the move from print to electron[...]
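To make the styles-versus-XML point concrete, here is a minimal sketch of why the Open XML format gives publishers something more robust to hold onto than style names: a .docx file is a ZIP of XML parts, and structured content controls (w:sdt elements) can be read reliably from it. This is a generic illustration, not the add-in's actual schema or the NLM archival format; the file name is a placeholder.

# Minimal sketch: list the content controls (w:sdt) in a Word document's main
# body, returning each control's tag and its visible text. A publisher's tool
# can key off these tags, where style-based templates tend to break.
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def list_content_controls(docx_path):
    with zipfile.ZipFile(docx_path) as docx:
        body = ET.fromstring(docx.read("word/document.xml"))
    controls = []
    for sdt in body.iter(W + "sdt"):
        tag_el = sdt.find("./" + W + "sdtPr/" + W + "tag")
        tag = tag_el.get(W + "val") if tag_el is not None else None
        text = "".join(t.text or "" for t in sdt.iter(W + "t"))
        controls.append((tag, text))
    return controls

if __name__ == "__main__":
    for tag, text in list_content_controls("article.docx"):  # placeholder path
        print(tag, ":", text[:60])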


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/word-science-publishing/fernicola.wma




Pablo Fernicola demonstrates the Word add-in for scientific authors

Thu, 17 Apr 2008 15:29:00 GMT












In this screencast, Pablo Fernicola demonstrates the technical preview of a new scientific publishing add-in for Word. The add-in enables reading and writing of XML-based documents in the archival format used by the National Library of Medicine.
(image)


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/word-science-publishing/fernicola.wmv




Making sense of CO2 data: A Microsoft/Berkeley collaboration

Thu, 03 Apr 2008 15:26:00 GMT

In this podcast, MSR researcher Catharine van Ingen and Berkeley micrometeorologist Dennis Baldocchi talk with Jon Udell about their collaboration on www.fluxdata.org, a SharePoint portal to a scientific data server. The server contains carbon-dioxide flux data gathered from a worldwide network of sensors, and provides SQL Server data cubes that help scientists collaboratively make sense of the data. Dennis Baldocchi is a professor of biometeorology at Berkeley. His research focuses on the physical, biological, and chemical processes that control trace gas and energy exchange between vegetation and the atmosphere. He also studies the micrometeorology of plant canopies. Catharine van Ingen, a partner architect with Microsoft Research in San Francisco, does e-science research exploring how database technologies can help change collaborative research in the earth sciences. She collaborates with carbon climate researchers and hydrologists. Links Fluxnet website MSR news article JU: Dennis, you're someone who's pulling together a worldwide network of CO2 monitoring stations. Can you briefly explain how these devices work? DB: Sure. Let me give you a bit of history. Back in the late 1950s, David Keeling made some of the first measurements of carbon dioxide concentration -- on Mauna Loa in Hawaii, in the Arctic, very remote locations. They saw an increase in the CO2 concentration in winter, and a decrease in summer. The increase is due to respiration in the biosphere, the decrease is due to photosynthesis. And on top of this they saw a trend due to fossil fuel combustion and logging of tropical forests. These measurements were just CO2 concentrations. As atmospheric scientists, we know that changes in the atmospheric concentration are due to fluxes. We measure actual fluxes: moles of carbon dioxide, per meter squared, per second, between the atmosphere and the biosphere. We do it with a combination of sensors. One is a three-dimensional sonic anemometer, which measures up-and-down and lateral-and-longitudinal motions of the air, ten times a second. And simultaneously with new sensors we measure instantaneous change in CO2 concentration. JU: So it's a combination of sensing wind speed and sensing atmospheric gas. DB: Absolutely. We measure a covariance between the two, and theoretically that's related to the flux density. JU: And this population of sensors has been growing for 15 or more years? DB: Yeah, my old lab in Oak Ridge, Tennessee, made some of the first sensors we were using in the early 90s. Around then a company called LI-COR started making a sensor that's about 15 centimeters long and shoots an infrared beam from source to detector. The air can blow through this sensor, and it's low power, doesn't need pumps, so it can be deployed in the middle of nowhere. Many of us run with solar power, so we have a PC that pulls an amp, then the sensor pulls another amp, so for two amps we can run a flux system. JU: As Catharine points out, there's a long tradition of large-scale collaboration in some scientific disciplines, but it's relatively new in other areas, and it sounds like this is one of those. DB: Yeah. I was a grad student in the 80s and I remember my professor having a desk full of data. People would knock on the door wanting to borrow it, and there was always some reluctance; it was really a single-investigator culture at the time. In many ways I credit our Italian colleagues, they were really gregarious and [...]
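Baldocchi's "covariance between the two" is the eddy-covariance method: the flux is estimated as the mean product of the fluctuations of vertical wind speed and CO2 density over an averaging period. The sketch below is a much-simplified illustration with synthetic numbers and assumed units; real processing adds coordinate rotation, density corrections, and quality control that are omitted here.

# Simplified eddy-covariance sketch: flux ~ mean(w' * c'), where w' and c' are
# fluctuations of vertical wind speed (m/s) and CO2 density (umol/m^3) about
# their means over the averaging period.
import numpy as np

def co2_flux(w, c):
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    w_prime = w - w.mean()                      # fluctuation about the mean
    c_prime = c - c.mean()
    return float(np.mean(w_prime * c_prime))    # covariance = flux, umol m^-2 s^-1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10 * 60 * 30                            # 10 Hz samples over 30 minutes
    w = rng.normal(0.0, 0.3, n)                 # synthetic vertical wind
    c = 16000 - 20 * w + rng.normal(0, 2, n)    # synthetic correlated CO2 density
    print("F_c ~= %.2f umol m^-2 s^-1" % co2_flux(w, c))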


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/fluxnet/fluxnet.wma




Cluster computing for the classroom

Thu, 27 Mar 2008 11:35:00 GMT

Kyril Faenov is the General Manager of the Windows HPC product unit. Before founding the HPC team in 2004, Kyril worked on a broad set of projects across Microsoft, including running the planning process for Windows Server 2008, co-founding a distributed systems project in the office of the CTO, and developing scale-out technology in Windows 2000. Kyril joined Microsoft in 1998 as the result of the acquisition of Valence Research, an Internet server clustering startup he co-founded and grew to profitability by securing MSN, Microsoft.com, and some of the world's largest web sites as its clients. Rich Ciapala is a program manager in Microsoft HPC++ Labs, an incubation team within the Windows HPC Server product unit. Rich joined Microsoft in 1992 and has held a number of different positions in technical sales, Microsoft Consulting Services, the Windows Customer Advisory team and the Visual Studio product team. Links Microsoft HPC++ CompFin Lab Kyril Faenov and Rich Ciapala discuss a new HPC++ Labs project that enables students to run computation-intensive experiments involving large amounts of financial data. JU: What Rich just demoed, which we'll show in a screencast, is how a financial model can be deployed to a server that acts as a front-end to a compute cluster. It's a nice easy way for students to use a model developed by a professor, select a basket of securities, run a very intensive computation on them against large chunks of data, and get answers back in an Excel spreadsheet. The bottom line is that the students can run an experiment using a level of computing power that was never before so easily accessible. KF: Yeah, because of the complexity involved in deploying systems like that, acquiring the data, and curating it, a lot of universities don't have this kind of infrastructure in place. So for a number of students who haven't done this before, this will make it available for the first time. For others who have, it will make it quite a bit easier. JU: Now these are not computer science students who are learning about high performance computing, and about writing programs for parallel machines; these are students who are learning about financial modeling, and this just makes a tool available to them that can accelerate that. KF: Precisely. Most of our HPC customers are scientists, or engineers, or business analysts, not computer scientists. They're folks who use mathematics, statistics, differential equations ... sometimes not even math directly, but applications that encode these mathematical models to do research, or engineering, or risk modeling, or decision making. To them it's just a tool, and they want to use it in the way they use PCs today, as transparently and straightforwardly as possible. JU: What's the situation today for most people? In the case of the covariance model Rich showed in the demo, if it weren't being done like that, how would it be done? KF: You can do it in Excel, or MATLAB, or SAS, on the workstation. So you'd acquire the data, and use your preferred tool ... JU: ... and wait a long time ... KF: ... and wait a long time. And if you want to do a significant amount of data -- like a year's worth, for a large number of stocks -- it might not even be possible at all. Or you might load it up into a server, but then you have to figure out how to write an application, how to deploy it out to the server, then figure out how to submit the data to the model,[...]
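For a sense of what the cluster is being asked to do, here is a toy, single-machine version of a covariance model over a basket of securities. The tickerless prices are made up for illustration; this is not the CompFin Lab's actual interface, which, as described above, is reached from Excel through the cluster's front-end server.

# Toy covariance model: compute the covariance matrix of daily returns for a
# basket of securities from a (days x securities) price array.
import numpy as np

def covariance_of_returns(prices):
    prices = np.asarray(prices, dtype=float)
    returns = prices[1:] / prices[:-1] - 1.0     # daily simple returns
    return np.cov(returns, rowvar=False)         # columns are securities

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 250 trading days, 4 hypothetical securities, random-walk prices
    prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, (250, 4)), axis=0)
    print(np.round(covariance_of_returns(prices), 6))

On a workstation this scales poorly once the basket and the history grow; the point of the Labs project is to run exactly this kind of computation against years of market data on a cluster and hand the result back to the student's spreadsheet.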


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/hpclabs/hpc.wma




A demonstration of cluster computing for the classroom

Thu, 27 Mar 2008 11:33:00 GMT










In this screencast, Rich Ciapala demonstrates Microsoft HPC++ CompFin Lab, which integrates Microsoft HPC Server, a central market data database, and Microsoft productivity products to provide university courses with an online service to publish, execute and manage computational finance models.

(image)


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/hpclabs/hpc.wmv




Understanding CardSpace

Thu, 20 Mar 2008 01:00:00 GMT

In this podcast, Jon Udell chats with Vittorio Bertocci, author of Understanding Windows CardSpace. The discussion traces the evolution of the identity metasystem, explores the rationale for CardSpace, and considers the unsolved problem of public online identity for individuals. Vittorio Bertocci is a senior technical evangelist for Microsoft Corporation. He works with Fortune 100 and major G100 enterprises worldwide, helping them to stay ahead of the curve and take advantage of the latest technologies. He is the primary author of Understanding Windows CardSpace: An introduction to the concepts and challenges of digital identities. Links Vibro.NET: Vittorio Bertocci's blog Understanding Windows CardSpace JU: What I particularly liked about this book is the lengthy introduction that sets the context, not just for CardSpace but for previous iterations -- what problems did they solve, what problems did they not solve, and why does that lead us to the architecture we have now. For example, you discuss SSL client certificates. I remember thinking, in 1996 or so, when that capability was present in both Netscape and IE, here we go. No more passwords. Obviously that didn't happen. But why not? VB: The SSL client strategy, from a cryptographic perspective, is perfectly sound. But it's a paradigmatic example of how technology alone cannot solve a problem that involves human interaction. The certificate is a construct that's made for computer scientists. It says that the subject is the rightful owner of a certain public key, which doesn't really resonate with my mother or my sister. JU: But it didn't have to be presented that way. It could have been presented as, here is the managed card -- in modern terminology -- that you will use when you go to the Staples website. So maybe it was just too early. Or maybe the nature of that certificate didn't lend itself to the embedding of assertions in an expressive and flexible way. VB: Yes. Certificates cannot be managed cards for two reasons. One is practical and could have been easily changed. The metaphor could have been friendlier, as you say. But the other thing is that a certificate is a primary token, your credentials rather than your identity. It is the mechanism for proving that you are the person entitled to that specific key. If the certificate is given to me instead of to you, it's the same. There is nothing in it that says it's you. Your identity is instead something that is about yourself. When you use a managed card, you are leveraging a relationship that you have with somebody -- your airline, your government. It's true the certificate could be the enabling mechanism for expressing this relationship. But suppose I am a customer of Alitalia, and I have a card in my wallet that, when I show it to the right people, enables me to enjoy certain advantages that are part of my identity as a customer. But my relationship with this airline, the fact that I'm entitled to a certain right, can change. JU: Yes. So if the right is hardcoded into the certificate, that's fairly static. As opposed to the more dynamic nature of the identity metasystem, in which attributes are exchanged on the fly. VB: Exactly. The attributes that make sense in a specific context -- like whether you do or don't have a certain privilege -- should come down dynamically. Embedding them in the certificate is dangerous. I have this conversation often with governments. They [...]
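Bertocci's distinction can be caricatured in a few lines: a certificate binds a subject to a key with attributes frozen at issuance, while a claims token is minted on demand from the provider's current view of the relationship and expires quickly. The snippet below is only a toy illustration of that idea, not CardSpace or any real token format; all names and values are hypothetical.

# Toy contrast between a long-lived certificate with baked-in attributes and a
# short-lived claims token issued on demand from the provider's current data.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Certificate:
    subject: str
    public_key: str
    attributes: dict        # fixed at issuance; goes stale when the relationship changes
    expires: datetime       # typically years away

@dataclass
class ClaimsToken:
    subject: str
    claims: dict
    expires: datetime

CUSTOMER_DB = {"jon": {"tier": "gold", "lounge_access": True}}  # the airline's current view

def issue_claims_token(subject, requested_claims):
    """Mint a short-lived token containing only the requested, current claims."""
    record = CUSTOMER_DB[subject]
    claims = {name: record[name] for name in requested_claims if name in record}
    return ClaimsToken(subject, claims, datetime.now(timezone.utc) + timedelta(minutes=5))

if __name__ == "__main__":
    cert = Certificate("jon", "MIIB...", {"tier": "gold"},
                       datetime(2030, 1, 1, tzinfo=timezone.utc))
    print("certificate says:", cert.attributes)
    print("token says:", issue_claims_token("jon", ["tier"]).claims)
    CUSTOMER_DB["jon"]["tier"] = "silver"        # the relationship changes...
    print("certificate still says:", cert.attributes)
    print("next token says:", issue_claims_token("jon", ["tier"]).claims)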


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/understanding-cardspace/understanding-cardspace.wma




Robotics: A new approach

Thu, 13 Mar 2008 15:26:00 GMT

In this podcast, Jon Udell invites Tandy Trower and Henrik Nielsen to explain why robotics is taking off, and how their new approach to the technology will generalize to a broad range of scenarios. JU: So you were just in Japan. What did you see and do? TT: We were at IREX, the international robotics exhibition in Tokyo. All forms of robots were there, heavily dominated by industrial robots. That was the big-ticket item. But we were in a smaller section that focused on this new market, service robots, which are moving into new areas. Industrial robots have done the dangerous, dull, and dirty jobs. Now there's a new market coming, where robots move outside the factories and into the homes. It's a dramatic change. Industrial robots are very expensive, they require special operators, they perform repetitive functions, and they're dangerous for humans to interact with. But that market is starting to flatten out. So a lot of the vendors in that area, including one of our best supporting partners, KUKA, one of the top industrial robotic arm manufacturers in the world, are looking for new markets, and are very anxious to engage with us in this new service, or personal, robotics market. Bill Gates reflected this in his January article in Scientific American, where he likened the personal and service robotics world to the PC world in the 1970s. The personal computer market, in its infancy, looked kind of weird. You had the Commodore PET, which had a strange little keyboard and saved programs to cassette, you had the Apple II. The transition we see in this robotics market now is very similar to what we saw coming out of that era, and even the industrial vendors are starting to look to this new market as a place to go. JU: In this case, there's also a particular demographic driver: aging populations create the need for these personal assistants in the home. And in Japan, in particular, there's a special interest in companionable robots. TT: Yes. In Japan and in many Asian countries, there's much more interest in the social aspect of robots. It's partly cultural: they grew up with Astro Boy and the idea that robots were friendly companions. So you're right, one of the biggest motivating factors is this aging of the population. I face this myself. My father-in-law is 84, he lives on his own, he needs help from his family to be able to live independently. It certainly would be helpful if we had more technology that would allow us to stay in touch with him, remind him to take his medications, connect him better with his health care providers -- these are all things that robots could perform. It's also the case that in the Asian countries, because of family and cultural traditions, it's more important to take care of your elders. JU: So if the analogy is to the early PC era, then you're providing what is, in a sense, DOS. TT: Exactly. HN: I actually think robotics can grow far beyond where the PC started out. Because the PC, until very recently, had a fairly uniform form factor. You could rely on a screen and a keyboard and a mouse, and that dictated what the user interface could be. As soon as you start having what I call more context-aware applications, things that know where you are, what you are doing, what the surroundings are doing -- and not just the local environment -- this causes the computation and the applications to be completely different. They are inherently part of the environment. They have [...]


Media Files:
http://video.ch9.ms/llnwd/on10/perspectives/robotics/robotics.wma




Embedding a Popfly widget in Facebook

Mon, 08 Oct 2007 21:00:54 GMT

In this ten-minute screencast (Silverlight, Flash), Adam Nathan shows you how to personalize your Facebook page with a Popfly-based photo viewer.

(image)



Andreas Ulbrich demonstrates the Microsoft Visual Programming Language

Mon, 06 Aug 2007 15:30:00 GMT

In an earlier screencast, Henrik Nielsen illustrates how the Microsoft Robotics Studio, building on top of the CCR (Concurrency and Coordination Runtime) and DSS (Decentralized Software Services) technologies, exposes a RESTful service-oriented architecture.

In this companion screencast, Andreas Ulbrich demonstrates VPL (Visual Programming Language), a diagram-oriented dataflow language. Although it was created for the Robotics Studio, it is -- as you'll see here -- a very general way to visualize, orchestrate, and debug message-driven services that run work in parallel.
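The dataflow idea VPL draws as boxes and wires can be sketched, very loosely, as services connected by message queues. The snippet below is plain Python with threads and queues, not the actual CCR/DSS or VPL runtime; the two stages and their "sonar" messages are invented for illustration, but the shape is the same: each stage runs concurrently, reacts to incoming messages, and posts results downstream.

# Minimal dataflow sketch: independent stages wired into a pipeline by queues,
# each running in its own thread and reacting to incoming messages.
import queue
import threading

STOP = object()   # sentinel that tells a stage to shut down and propagate shutdown

def service(handler, inbox, outbox):
    """Apply handler to each incoming message and post the result downstream."""
    while True:
        msg = inbox.get()
        if msg is STOP:
            if outbox is not None:
                outbox.put(STOP)
            break
        result = handler(msg)
        if outbox is not None:
            outbox.put(result)

# Two hypothetical stages: raw sensor reading -> scaled value -> log line.
q_raw, q_scaled, q_out = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=service, args=(lambda v: v * 0.1, q_raw, q_scaled)),
    threading.Thread(target=service, args=(lambda v: "distance=%.1fm" % v, q_scaled, q_out)),
]
for t in threads:
    t.start()

for reading in [105, 98, 310]:       # pretend sonar readings
    q_raw.put(reading)
q_raw.put(STOP)

while (line := q_out.get()) is not STOP:
    print(line)
for t in threads:
    t.join()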

(image)


Media Files:
http://video.ch9.ms/llnwd/ch9/6/2/8/6/5/2/vpl.wmv