
Phil Windley's Technometria



Building the Internet of My Things



Last Build Date: Wed, 24 May 2017 21:42:00 -0600

Copyright: Copyright 2017
 



Sovrin Web of Trust

Wed, 24 May 2017 21:39:45 -0600

Summary: Sovrin uses a heterarchical, decentralized Web of Trust model to build trust in identifiers and give people clues about what and whom to trust.

The Web of Trust model for Sovrin is still being developed, but it differs markedly from the trust model used by the Web. The Web (specifically TLS/SSL) depends on a hierarchical certificate authority model called the Public Key Infrastructure (PKI) to determine which certificates can be trusted. When your browser verifies that the domain name of the site you're on is associated with the public key being used to encrypt HTTP transmissions (and perhaps that both are controlled by a specific organization), it relies on a certificate it downloads from the Website itself. How then can this certificate be trusted? Because it was cryptographically signed by some other organization that issued the public key and presumably checked the credentials of the company buying the certificate for the domain. But that raises the question "how can we trust the organization that issued the certificate?" By checking its certificate, and so on up the chain until we get to the root (this process is called "chain path validation"). The root certificates are distributed with browsers, and we trust the browser manufacturers to include only trustworthy root certificates. These root certificates are called "trust anchors" because their trustworthiness is assumed, rather than derived.

Webs of Trust

In contrast, a Web of Trust model uses webs of interlocking certificates, signed by multiple parties, to establish the veracity of a certificate (see Web of Trust on Wikipedia for more detail). An entity can be part of many overlapping webs of trust because its certificate can be signed by many parties.

Sovrin uses a heterarchical, decentralized Web of Trust model, not the hierarchical PKI model, to establish the trustworthiness of certificates. The methodology for doing this is built into the Sovrin system, not layered on top of it, by combining a decentralized ledger, decentralized identifiers, and verifiable claims. While the Sovrin Foundation will establish some trust anchors1 (usually the same Stewards who operate validator nodes on the Sovrin network), these are merely for bootstrapping the system and aren't necessary for trusting the system once it is up and running. Sovrin uses this trust system to protect itself from DDoS attacks, among other things.

By allowing an identifier to be part of multiple webs of trust, Sovrin allows reputational context to be taken into account. For example, being a good plumber doesn't guarantee that a person will be a good babysitter, but a person who is a good plumber and a trustworthy babysitter could be part of different webs of trust that take these two contexts into account. This makes the overall trust model much more flexible and adaptable to the circumstances within which it will be used.

PKI is good for one thing on the Web: showing that the public key used to secure HTTP transmissions is correct. In contrast, Sovrin's decentralized web of trust model can support whatever trust contexts people need. The goal of Sovrin is to provide the infrastructure upon which these overlapping webs of trust can be built for various applications. Lyft, Airbnb, and countless other sharing economy businesses are essentially specialized trust frameworks. Sovrin provides the means of creating similar trust frameworks without the need to build the trust infrastructure over and over.
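Chain path validation, described above, is easy to picture in code. Here's a minimal, self-contained Python sketch of the walk from a site's certificate up to a trust anchor; the dict-based certificates and the check_signature stand-in are invented for illustration and are not real X.509 machinery.

    # Toy illustration of chain path validation (not real X.509).
    # Certificates are dicts; signature checking is a stand-in.

    TRUST_ANCHORS = {"Example Root CA"}  # shipped with the browser

    def check_signature(cert, issuer_cert):
        # Stand-in: a real implementation verifies cert's signature
        # against the issuer's public key.
        return cert["issuer"] == issuer_cert["subject"]

    def validate_chain(chain):
        """chain[0] is the site's certificate; chain[-1] is the root."""
        for cert, issuer in zip(chain, chain[1:]):
            if not check_signature(cert, issuer):
                return False
        # The walk only ends in trust if the root is a trust anchor.
        return chain[-1]["subject"] in TRUST_ANCHORS

    site = {"subject": "www.example.com", "issuer": "Example Intermediate CA"}
    intermediate = {"subject": "Example Intermediate CA", "issuer": "Example Root CA"}
    root = {"subject": "Example Root CA", "issuer": "Example Root CA"}

    print(validate_chain([site, intermediate, root]))  # True

The important structural point is the last line of validate_chain: every path has to terminate in a certificate whose trustworthiness is assumed rather than derived.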
Building Trust

Trust anchors are not the only way one could build trust in an identifier for a given purpose. I can think of the following ways that an identifier could come to be trusted:

Personal knowledge — The identifier is personally known to you through interactions outside Sovrin. People close to me would be in this category. This category also includes identifiers like those for my employer or other entities who I interact with in the physical world and can thus establish a trust connection through means outside of Sovrin. If I k[...]



Sovrin In-Depth Technical Review

Mon, 22 May 2017 11:02:17 -0600

Summary: Sovrin Foundation has engaged Engage Identity to perform a security review of Sovrin's technology and processes. Results will be available later this summer.


The Sovrin Foundation and Engage Identity announced a new partnership today. Security experts from Engage Identity will be completing an in-depth technical review of the Sovrin Foundation’s entire security architecture.

The Sovrin Foundation cares deeply that the technology relied on by everyone depending on Sovrin is secure. That technology protects many valuable assets, including private personal information and essential business data. As a result, we want to be fully aware of the risks and vulnerabilities in Sovrin. In addition, the Sovrin Foundation will benefit from having a roadmap for future security investments.

We're very happy to be working with Engage Identity, a leader in the security and identity industry. Established and emerging cryptographic identity protocols are among their many areas of expertise. They have extensive experience providing security analysis and recommendations for identity frameworks.

The Engage Identity team is led by Sarah Squire, who has worked on user-centric open standards for many organizations including NIST, Yubico, and the OpenID Foundation. Sarah will be joined by Adam Migus and Alan Viars, both experienced authorities in the fields of identity and security.

The final report will be released this summer and will include a review of the current security architecture as well as opportunities for future investment. We intend to make the results public. Anticipated subjects of in-depth research are:

  • Resilience to denial of service attacks
  • Key management
  • Potential impacts of a Sovrin-governed namespace
  • Minimum technical requirements for framework participants
  • Ongoing risk management processes

Sovrin Foundation is excited to take this important step forward with Engage Identity to ensure that the future of self-sovereign identity management can thrive and grow.





Life-Like Anonymity

Sat, 13 May 2017 10:08:26 -0600

Summary: Natural anonymity comes from our ability to recognize others without the aid of an external identity system. Online interactions can only mirror life-like anonymity when we have decentralized identity systems that don't put all interactions under the purview of centralized administrative systems.

At VRM-day prior to Internet Identity Workshop last week, Joyce Searls commented that she wants the same kind of natural anonymity in her digital life that she has in real life.

In real life, we often interact with others—both people and institutions—with relative anonymity. For example, if I go to the store and buy a Coke with cash, there is no exchange of identity information necessary. Even if I use a credit card, it's rarely the case that the entire transaction happens under the administrative authority of the identity system inherent in the credit card. Only the financial part of the transaction takes place in that identity system. This is true of most interactions in real life.

In contrast, in the digital world, very few meaningful transactions are done outside of some administrative identity system. There are several reasons why identity is so important in the digital world:

  • Continuity—Because of the stateless nature of HTTP, building a working shopping cart, for example, requires using some kind of token to correlate independent HTTP transactions. These tokens are popularly known as cookies. While they can be pseudonymous, they are often correlated across multiple independent sessions using an authenticated identifier. This allows, for example, the customer to have a shopping cart that persists across time and on different devices.
  • Convenience—To extend the shopping example: so long as the customer is authenticating anyway, we might as well store additional information like addresses and credit card numbers for their convenience. Storing these allows the customer to complete transactions without having to enter the same information over and over.
  • Trust—There are some actions that should only be taken by certain people, or people in certain roles, or with specific attributes. Once a shopping site has stored my credit card, for example, I ought to be the only one who can use it. Identity systems provide authentication mechanisms as the means of knowing who is at the other end of the wire so that we know what actions they're allowed to take. This places identifiers in context so they can be trusted.
  • Surveillance—Identity systems provide the means of tracking individuals across transactions for purposes of gathering data about them. This data gathering may be innocuous or nefarious, but there is no doubt that it is enabled by the identity systems in use on the Internet.

In real life, we do without identity systems for most things. You don't have to identify yourself to the movie theater to watch a movie or log into some system to sit in a restaurant and have a private conversation with friends. In real life, we act as embodied, independent agents. Our physical presence and the laws of physics have a lot to do with our ability to function with workable anonymity across many domains.

So, how did we get surveillance, and its attendant effects on natural anonymity, as an unintended but oft-exploited feature of administrative digital identity systems? Precisely because they are administrative.

Legibility

Legibility is a term used to describe how administrative systems make things governable by simplifying, inventorying, and rationalizing the things around them.
Venkatesh Rao nicely summarized James C. Scott's seminal book on legibility and its unintended consequences: Seeing Like a State. Identity systems make people legible in order to offer continuity, convenience, and trust. But that legibility also allows surveillance. In some respects, this is the trade-off we always get with administrative systems. By creating legibility, they threaten privacy. Administrative [...]



Hyperledger Welcomes Project Indy

Tue, 02 May 2017 08:14:26 -0600

Summary: The Sovrin Foundation announced at the 24th Internet Identity Workshop (IIW) that its distributed ledger, custom-built for independent digital identity, has been accepted into incubation under Hyperledger, the open source collaborative effort created to advance cross-industry blockchain technologies hosted by The Linux Foundation.

We're excited to announce Indy, a new Hyperledger project for supporting independent identity on distributed ledgers. Indy provides tools, libraries, and reusable components for providing digital identities rooted on blockchains or other distributed ledgers so that they are interoperable across administrative domains, applications, and any other silo.

Why Indy?

Internet identity is broken. There are too many anti-patterns and too many privacy breaches. Too many legitimate business cases are poorly served by current solutions. Many have proposed distributed ledger technology as a solution; however, building decentralized identity on top of distributed ledgers that were designed to support something else (cryptocurrency or smart contracts, for example) leads to compromises and short-cuts. Indy provides Hyperledger projects and other distributed ledger systems with a first-class decentralized identity system.

Indy's Features

The most important feature of a decentralized identity system is trust. As I wrote in A Universal Trust Framework, Indy provides accessible provenance for trust to support user-controlled exchange of verifiable claims about an identifier. Indy also has a rock-solid revocation model for cases where those claims are no longer true. Verifiable claims are a key component of Indy's ability to serve as a universal platform for exchanging trustworthy claims about transactions. Provenance is the foundation of accountability through recourse.

Another vital feature of decentralized identity—especially for a public ledger—is privacy. Privacy by Design is baked deep into the Indy architecture, as reflected by three fundamental features:

First, identifiers on Indy are pairwise unique and pseudonymous by default to prevent correlation. Indy is the first DLT to be designed around Decentralized Identifiers (DIDs) as the primary keys on the ledger. DIDs are a new type of digital identifier that were invented to enable long-term digital identities that don't require centralized registry services. DIDs on the ledger point to DID Descriptor Objects (DDOs), signed JSON objects that can contain public keys and service endpoints for a given identifier. DIDs are a critical component of Indy's pairwise identifier architecture.

Second, personal data is never written to the ledger. Rather, all private data is exchanged over peer-to-peer encrypted connections between off-ledger agents. The ledger is used only for anchoring, rather than publishing, encrypted data.

Third, Indy has built-in support for zero-knowledge proofs (ZKP) to avoid unnecessary disclosure of identity attributes—privacy-preserving technology that has long been pursued by IBM Research (Idemix) and Microsoft (U-Prove), but which a public ledger for decentralized identity now makes possible at scale.

Indy is all about giving identity owners independent control of their personal data and relationships. Indy is built so that the owner of the identity is structurally part of transactions made about that identity. Pairwise identifiers stop third parties from talking behind the identity owner's back, since the identity owner is the only place pairwise identifiers can be correlated.
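To make the DID-to-DDO relationship above concrete, here's an illustrative DDO rendered as a Python dict. The field names follow early drafts of the DID specification, and the DID, key material, and endpoint are all invented for the example.

    # Illustrative DDO for an invented DID; field names follow early
    # DID spec drafts, and every value here is an example, not real data.
    ddo = {
        "@context": "https://w3id.org/did/v1",
        "id": "did:sov:21tDAKCERh95uGgKbJNHYp",
        "publicKey": [{
            "id": "did:sov:21tDAKCERh95uGgKbJNHYp#key-1",
            "type": "Ed25519VerificationKey",
            "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",
        }],
        "service": [{
            "type": "AgentService",
            "serviceEndpoint": "https://agents.example.com/agency",
        }],
    }

Because identifiers are pairwise, each relationship gets its own DID and DDO, so two relying parties can't compare notes and correlate the identity owner across contexts.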
(image: Sovereign Identity)

Indy is based on open standards so that it can interoperate with other distributed ledgers. These start, of course, with public-key cryptography standards. Other important standards cover things like the format of Decentralized Identifiers, what they point to, and how agents exchange verifiable claims. Indy also supports a system of attrib[...]



Pico Programming Lesson: Modules and External APIs

Tue, 18 Apr 2017 15:25:00 -0600

Summary: A new pico lesson is available that shows how to use user-defined actions in modules to wrap an API.


I recently added a new lesson to the Pico Programming Lessons on Modules and External APIs. KRL (the pico programming language) has parameterized modules that are great for incorporating external APIs into a pico-based system.

This lesson shows how to define actions that wrap API requests, put them in a module that can be used from other rulesets, and manage the API keys. The example (code here) uses the Twilio API to create a send_sms() action. Of course, you can also use functions to wrap API requests where appropriate (see the Actions and Functions section of the lesson for more detail on this).

KRL includes a key pragma in the meta block for declaring keys. The recommended way to use it is to create a module just to hold keys. This has several advantages:

  • The API module (Twilio in this case) can be distributed and used without worrying about key exposure.
  • The API module can be used with different keys depending on who is using it and for what.
  • The keys module can be customized for a given purpose; a typical deployment will include keys for the several modules used in the system.
  • The pico engine can manage keys internally so the programmer doesn't have to worry (as much) about key security.
  • The key module can be loaded from a file or password-protected URL to avoid key loss.

The combination of built-in key management and parameterized modules is a powerful abstraction for building easy-to-use KRL SDKs for APIs, as the sketch below suggests.
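For readers who don't know KRL, here's the same pattern in Python terms: the wrapper is parameterized with keys supplied from outside, so it can be shared without shipping credentials. The module and function names are invented for the sketch; the endpoint shown is Twilio's standard REST SMS endpoint.

    # Rough Python analogy of the KRL pattern: a module wrapping an
    # external API, with keys supplied by a separate keys module.
    import requests  # assumes the requests library is installed

    # In KRL this would be a keys ruleset declared with the key pragma;
    # here it's just a dict kept separate from the wrapper code.
    TWILIO_KEYS = {"account_sid": "ACXXXXXXXX", "auth_token": "secret"}

    def make_send_sms(keys):
        """Parameterize the wrapper with keys, as a KRL module would be."""
        base = "https://api.twilio.com/2010-04-01/Accounts/%s" % keys["account_sid"]

        def send_sms(to, frm, message):
            # Twilio's SMS endpoint, authenticated with the SID/token pair.
            return requests.post(
                base + "/Messages.json",
                data={"To": to, "From": frm, "Body": message},
                auth=(keys["account_sid"], keys["auth_token"]),
            )

        return send_sms

    send_sms = make_send_sms(TWILIO_KEYS)
    # send_sms("+15555550100", "+15555550101", "hello from a pico")

In the KRL version the same separation falls out naturally: the Twilio ruleset declares its configuration parameters, and a separate keys ruleset supplies them, so the API module never contains credentials.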

Going Further

The pico lessons have all been recently updated to use the new pico engine. If you're interested in learning about reactive programming and the actor model with picos, walk through the Quickstart and then dive into the lessons.


Photo Credit: Blue Marble Geometry from Eric Hartwell (CC BY-NC-SA 3.0)





The New Pico Engine Is Ready for Use

Fri, 24 Mar 2017 13:22:39 -0600

Summary: The new pico engine is nearly feature complete and being used in a wide variety of settings. I'm declaring it ready for use.

A little over a year ago, I announced that I was starting a project to rebuild the pico engine. My goal was to improve performance, make it easier to install, and support small deployments while retaining the best features of picos, specifically being Internet first. Over the past year we've met that goal, and I'm quite excited about where we're at.

Matthew Wright and Bruce Conrad have reimplemented the pico engine in NodeJS. The new engine is easy to install and quite a bit faster than the old engine. We've already got most of the important features of picos, and my students have redone large portions of our supporting code to run on the new engine. As a result, the new engine is sufficiently advanced that I'm declaring it ready for use. We've updated the Quickstart and Pico Programming Lessons to use the new engine. I'm also adding new lessons to help programmers understand the most important features of picos and KRL.

My Large-Scale Distributed Systems class (CS462) is using the new pico engine for their reactive programming assignments this semester. I've got 40 students going through the pico programming lessons as well as reactive programming labs from the course. The new engine is holding up well. I'm planning to expand its use in the course this spring.

Adam Burdett has redone the closet demo we showed at OpenWest last summer using the new engine running on a Raspberry Pi. One of the things I didn't like about using the classic pico engine in this scenario was that it made the solution overly reliant on a cloud-based system (the pico engine) and consequently not robust under network partitioning. If the goal is to keep my machines cool, I don't want them overheating because my network was down. Now the closet controller can run locally with minimal reliance on the wider Internet.

Bruce was able to use the new engine on a proof of concept for BYU's priority registration. This was a demonstration of the engine's ability to scale and handle thousands of picos. Running on a laptop, the engine handled 44,504 add/drop events in over 8,000 separate picos in 35 minutes and 19 seconds. That's a throughput of 21 registration events per second, or 47.6 milliseconds per request.

We've had several lunch-and-learn sessions with developers inside and outside BYU to introduce the new engine and get feedback. I'm quite pleased with the reception and interactions we've had. I'm looking to expand those now that the lessons are completed and several dozen people have worked through them. If you're interested in attending one, let me know.

Up Next

I've hired two new students, Joshua Olson and Connor Grimm, to join Adam Burdett and Nick Angell in my lab. We're planning to spend the summer getting Manifold, our pico-based Internet of Things platform, running on the new engine. This will provide a good opportunity to improve the new pico engine and give us a good IoT system for future experiments, supporting our ideas around social things.

I'm also contemplating a course on reactive programming with picos on Udemy or something like it. This would be much more focused on the practical aspects of reactive programming than my BYU distributed systems class. Picos are a great way to do reactive programming because they implement an actor model. That's one reason they work so well for the Internet of Things.
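To illustrate why the actor model fits this kind of reactive, IoT-style work, here's a generic actor sketch in Python. It is not the pico engine's implementation; it only shows the shape of the idea: each actor owns its state and reacts to queued events one at a time, so there's no shared-state locking.

    # Generic actor-model sketch in Python; not the pico engine's code.
    # Each actor owns its state and handles queued events sequentially.
    import queue
    import threading
    import time

    class Actor:
        def __init__(self):
            self.mailbox = queue.Queue()
            self.state = {"temperature": None}
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, event):
            # Asynchronous: senders never block while the event is handled.
            self.mailbox.put(event)

        def _run(self):
            while True:
                # Events are handled one at a time, so state needs no locks.
                self._react(self.mailbox.get())

        def _react(self, event):
            if event["type"] == "temperature_reading":
                self.state["temperature"] = event["value"]
                if event["value"] > 30:
                    print("too hot: turning the fan on")

    closet = Actor()
    closet.send({"type": "temperature_reading", "value": 34})
    time.sleep(0.1)  # give the actor thread a moment to react

Picos add a great deal on top of this bare sketch (persistent state, channels, rulesets), but the sequential, message-driven core is the same idea.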
Going Further

If you'd like to explore the pico engine and reactive programming with picos, you can start with the Quickstart and move on to the pico programming lessons. We'd also love help with the open source implementation of the pico engine. The code is on Github, and there's a well-maintained set of issues that need to be worked. Bruce is the coordinator of these efforts. An[...]



What Use is a Master of Science in CS?

Thu, 23 Feb 2017 08:56:13 -0700

Summary: Don't botch a hiring situation because you don't understand what your candidate's credentials mean. An advanced degree in Computer Science has little to do with programming, but says a lot about the candidate's ability to solve difficult, open-ended problems.

Recently a friend of mine, Eric Wadsworth, remarked in a Facebook post:

My perspective, in my field (admittedly limited to the tech industry) has shifted. I regularly interview candidates who are applying for engineering positions at my company. Some of them have advanced degrees, and to some of those I give favorable marks. But the degree is not really a big deal, doesn't make much difference. The real question is, "Can this person do the work?" Having more years of training doesn't really seem to help. Tech moves so fast, maybe it is already stale when they graduate. Time would be better spent getting actual experience building real software systems.

I read this, and the comments that followed, with a degree of interest because most of them reflect a gross misunderstanding of what an advanced degree indicates. The assumption appears to be that people who get a BS in Computer Science are learning to program, and that therefore getting an MS means learning more about how to program. I understand why this can be confusing. We don't often hire a plumber when we need a mechanical engineer, but Computer Science and programming are still relatively young, and we're still working out exactly what the differences are.

The truth is that CS programs are not really designed to teach people to code, except as a necessary means to learning computer science, which is not merely programming. That's doubly true of a master's degree. There are no courses in a master's program that are specifically designed to teach anyone to program anything. You can learn to code at a thousand web sites. A CS degree includes topics like computational theory and practice, algorithms, database design, operating system design, networking, security, and many others, all presented in a way designed to create well-rounded professionals. The ACM Curriculum Guidelines (PDF) are a good place to see some of the detail in a program-independent way.

Most of what one learns in a Computer Science program has a long shelf life—by design. For example, I design the modules in my Large Scale Distributed Programming class to teach principles that have been important for 30 years and are likely to be important for 30 more. Preventing Byzantine failure, for example, has recently become the latest fad with the emergence of distributed ledgers. I learned about it in 1988. If your interview questions are asking people what they know about the latest JavaScript framework, you're unlikely to distinguish the person with a CS degree from someone who just completed a coding bootcamp.

What does one learn getting an advanced degree in Computer Science? People who've completed a master's degree have proven their ability to find the answers to large, complex, open-ended problems. This effort usually lasts at least six months and is largely self-directed. They have shown that they can explore the scientific literature to find answers and synthesize that information into a unique solution. This is very different from, say, looking at Stack Overflow to find the right configuration parameters for a new framework. Often, their work is only part of some larger problem being explored by a team of fellow students under the direction of someone recognized as an expert in a specific field of Computer Science.
If these kinds of skills aren't important to your project, then you're wasting your time and money hiring someone with an advanced degree. As Eric points out, the holder of an MS won't necessarily be a better programmer, especially as measured by your interview questions and tests. And if they are important, you're u[...]



Verifying Constituency: A Sovrin Use Case

Thu, 16 Feb 2017 15:52:41 -0700

Summary: Verified constituency is a simple but powerful means of creating a collective voice that is difficult to ignore or dismiss.

Recently, my representative held a town hall that didn't go so well. Rep. Chaffetz claims that "the protest crowd included people brought in from outside his district specifically to be disruptive." I'm not here to debate the veracity of that claim, but to make a proposal.

First, let's recognize that members of Congress are more apt to listen when they know they are hearing from constituents. Second, this problem is exacerbated online. They wonder, "Are all the angry tweets coming from voters in my district?" and likely conclude they're not. Britt Blaser's been trying to solve this problem for a while.

Suppose that I had four verified claims in my Sovrin agent:

  • Address Claim—A claim that I live at a certain address, issued by someone we can trust not to lie about this (e.g., my bank, utility company, or a third-party address verification service).
  • Constituency Claim—A claim written by the NewGov Foundation or some other trusted third party, based on the Address Claim, that I'm a constituent of Congressional District 3.
  • Voter Claim—A claim that says I'm a registered voter. Ideally this would be written by the State of Utah Election Office, but it might need to be done by someone like NewGov based on voter rolls for now.
  • Twitter Claim—A claim that proves I own a particular Twitter handle. Again, this would ideally be written by Twitter, but could be the work of a third party for now.1

Given these claims, Sovrin can be used to create a proof that @windley belongs to a verified voter in Congressional District 3. More generally, the proof shows that a given social media account belongs to a constituent who lives in a specific political jurisdiction. Anyone would be able to validate that proof and check the claims it is based on. This proof doesn't need to disclose anything beyond the Twitter handle and congressional district. No other personally identifying information need be disclosed by the proof. (A toy sketch of such a proof appears at the end of this post.)

How would we use this proof? Imagine, for example, a Website that publishes all tweets that contain the hashtag #UTCD3, but only if they are from Twitter handles certified to belong to people who live in Congressional District 3. A more ambitious use case would merge these verifications with the NewGov GEOvoter API to place the tweets on interactive maps showing where the hotspots are. Combined with sentiment analysis, the constituency proof could be used to show political sentiment across the country, across the state, or within the local water district.

Sovrin provides a trusted infrastructure for issuing and using the verified claims. NewGov or someone else would provide the reason for trusting the claims written about verified voters. Eventually these claims should be written by the Elections Office or Twitter directly, providing even more trust in the system.

Photo Credit: Jason Chaffetz Town Hall Meeting in American Fork, Utah on August 10, 2011 from Michael Jolley (CC BY 2.0). This post originated in a conversation I had with Britt Blaser.

Notes:

  1. Twitter is simply one example. We could have claims for Facebook, Instagram, other social media, email, or any online tool.

Tags: sovrin, sovrin use cases, voting, twitter, verified claims [...]
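Here's that toy sketch in Python. The claim and proof structures are invented for illustration; Sovrin's actual claim and proof formats are cryptographic and quite different, but the selective-disclosure idea is the same.

    # Toy illustration of selective disclosure: combine four verified
    # claims into a proof that reveals only the handle and the district.
    # These structures are invented; they are not Sovrin's actual formats.

    claims = {
        "address":      {"issuer": "Example Utility Co.",
                         "address": "123 Main St, Provo, UT"},
        "constituency": {"issuer": "NewGov Foundation",
                         "district": "UT-3", "based_on": "address"},
        "voter":        {"issuer": "Utah Election Office", "registered": True},
        "twitter":      {"issuer": "Example Verifier", "handle": "@windley"},
    }

    def constituency_proof(claims):
        """Disclose only what a verifier needs: the handle and district."""
        return {
            "handle": claims["twitter"]["handle"],
            "district": claims["constituency"]["district"],
            # References let a verifier check the supporting claims
            # without ever seeing the street address.
            "supported_by": ["constituency", "voter", "twitter"],
        }

    print(constituency_proof(claims))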



Student Profiles: A Proof of Concept

Sun, 12 Feb 2017 12:46:00 -0700

Summary: This post describes a proof of concept for a personal learning system called a student profile. The student profile gives students control over their personal information, including learning activities, and demonstrates how other parties can trust learning records kept in the student profile and shared by the student. This is a critical factor in creating personal learning environments that support life-long learning and give the university greater flexibility in system architecture.

In Sovrin Use Cases: Education, I broadly outlined how a decentralized identity ledger, Sovrin, could provide the tools necessary to build a decentralized university. This post takes the next step by laying out a three-phase project to provide a proof of concept.

Background

Teaching students to be life-long learners in a digital age includes giving them tools and techniques they can use after they leave the university. We do students a disservice when we supply them with only enterprise-level tools that they lose access to once they've graduated. Learning management systems are fine, but they're not a personal tool that supports life-long learning. BYU has been exploring personal learning environments and operates a thriving Domain of One's Own program in support of this ideal.

Architected correctly, personal learning environments provide additional, important benefits. For example, we're exploring the use of decentralized personal student profiles to create a virtual university using programs, certifications, and courses from several different institutions.

A Proof of Concept

In Sovrin Use Cases: Education, I wrote:

The idea is to create a decentralized system of student profiles that contain both profile data as well as learning records. These learning records can attest to any kind of learning activity, from the student having read a few paragraphs to the completion of her degree. By making the student the point of integration, we avoid having to do heavy, expensive point-to-point integrations between all the student information systems participating in the educational initiative.

The architecture relies on being able to write verifiable claims about learning activities and other student attributes. A verifiable claim allows the student to be the source of information about themselves that other parties can trust. Without this property, the decentralized approach doesn't work. I describe the details here: A Universal Trust Framework.

The proof of concept has three phases, described below. When finished, we will have a prototype system that demonstrates all the technology and interactions required to put this architecture into use. The things we learn in the proof of concept will guide us as we roll the architecture out globally.

Phase 0: A Learning Record Store

The goal of Phase 0 is to build a basic student profile that has an API manager, a learning record store, and some system for storing basic profile information.

(image: Basic Student Profile)

The system produced in Phase 0 is foundational. The student profile and learning record store (I'll use "student profile" to inclusively talk about the entire system from now on) provide the repository of student data. The student profile provides an API that supports event-based notification (mostly through the xAPI). The requirements for the system built in Phase 0 include the following:

  • API manager—the student profile will include a simple API manager.
  • Profile data—the student profile will be capable of storing and providing (through an API) basic profile data.
  • Learning record store—the student profile will include an xAPI-compatible LRS (see the example statement below).
  • xAPI notifications from Canvas—the student profile should accept xAPI calls from the University's tes[...]
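Since the learning record store speaks xAPI, here's what one of those learning-record notifications looks like, built as a Python dict. The actor, course URL, and endpoint are invented for the example, but the actor/verb/object shape is the standard xAPI statement format.

    # A minimal xAPI statement as a Python dict; actor, object, and
    # endpoint values are invented, but the shape is standard xAPI.
    statement = {
        "actor": {
            "objectType": "Agent",
            "mbox": "mailto:student@example.edu",
            "name": "Example Student",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "https://lms.example.edu/courses/cs462/quiz/3",
            "definition": {"name": {"en-US": "CS462 Quiz 3"}},
        },
    }

    # Delivering it to an LRS is a single authenticated POST (illustrative):
    # requests.post("https://profile.example.edu/xapi/statements",
    #               json=statement, auth=("key", "secret"),
    #               headers={"X-Experience-API-Version": "1.0.3"})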



A Universal Trust Framework

Mon, 30 Jan 2017 17:17:05 -0700

Summary: The Internet has never had a universal trust framework before. Imagine if you could build the next sharing economy application without having to also build the platform that helps people trust. This post describes a universal trust framework that is open to all. Sovrin changes the world by providing a universal means of trusting.

In We've stopped trusting institutions and started trusting strangers, Rachel Botsman talks about the "trust gap" that separates a place of certainty from something that is unknown. Some force has to help us "make the leap" from certainty to uncertainty, and that force is trust. Traditionally, we've relied on local trust that is based on knowing someone—acquired knowledge or reputation. In a village, I know the participants and can rely on personal knowledge to determine whom to trust. As society got larger, we began to rely on institutions to broker trust. Banks, for example, provide institutional trust by brokering transactions—we rely on the bank to transfer trust. I don't need to know you to take your credit card.

But lately, as Botsman says, "we've learned that institutional trust isn't meant for the digital age." In the digital age, we have to learn how to trust strangers. Botsman discusses sharing platforms like AirBnB and BlaBlaCar. You might argue that they're just another kind of institution, but there's a key difference: those platforms are bidirectional. For example, AirBnB lets guests rate their hosts, but also lets hosts rate guests. These platforms give more information about the individual in order to establish trust.

But beyond platforms like AirBnB lies distributed trust based on blockchains and distributed ledgers. Botsman makes the point that distributed trust provides a system wherein you don't have to trust the individual, only the idea (e.g., distributed cash transactions) and the platform (e.g., Bitcoin). You can do this when the system itself makes it difficult for a person to misrepresent themselves or their actions. When I send you Bitcoin, you don't have to trust me because the system provides provenance of the transaction and ensures that it happens correctly. I simply can't cheat.

At a fundamental level, trust leads us to believe what people say. Online this is difficult because identifiers lack the surrounding trustworthy context necessary to provide the clues we need to establish trust. Dick Hardt said this back in 2007. The best way to create context around an identifier is to bind it to other information in a trustworthy way. Keybase does this for public keys: it creates a verifiable context for a public key by allowing its owner to cryptographically prove she is also in control of certain domain names, Twitter accounts, and Github accounts, among others. Keybase binds those proofs—the context—to the key. Potential users of the public key can validate the cryptographic proofs for themselves and decide whether or not to trust that the public key belongs to the person they wish to communicate with. (The sketch at the end of this post shows the core check.)

Another key idea in reputation and trust is reciprocity. Accountability and a means of recourse when something goes wrong create an environment where trust can thrive. This is one of the secrets of sharing economy platforms like AirBnB. Botsman makes the point that she never leaves the towel on the floor of an AirBnB because the host "knows" her. She is accountable, and there is the possibility of recourse (a bad guest rating).
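Here's that sketch, using the PyNaCl library (assuming it's installed); the statement contents are invented. The core of a Keybase-style proof is just a signature that anyone holding the public key can verify for themselves.

    # Sketch of a Keybase-style proof check using PyNaCl (pip install pynacl).
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    # The key owner signs a statement binding this key to other identifiers.
    signing_key = SigningKey.generate()
    statement = b'{"twitter": "@alice", "domain": "alice.example.com"}'
    signed = signing_key.sign(statement)

    # Anyone with the public key can verify the binding for themselves.
    verify_key = signing_key.verify_key
    try:
        verify_key.verify(signed)
        print("proof checks out: the key holder made this statement")
    except BadSignatureError:
        print("proof is invalid")

Binding the context to the key this way means trust in the key doesn't depend on any single registry; it depends on proofs anyone can check.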
Trust Frameworks and Trust Transactions

The phrase we use to describe the platforms of AirBnB, BlaBlaCar, and other sharing economy companies is trust framework. Simply put, a trust framework provides the structure necessary to leap between the known and unknown. For exa[...]