
Phil Windley's Technometria



Building the Internet of My Things



Last Build Date: Fri, 24 Mar 2017 13:23:51 -0600

Copyright: Copyright 2017
 



The New Pico Engine Is Ready for Use

Fri, 24 Mar 2017 13:22:39 -0600

Summary: The new pico engine is nearly feature complete and being used in a wide variety of settings. I'm declaring it ready for use.

A little over a year ago, I announced that I was starting a project to rebuild the pico engine. My goal was to improve performance, make it easier to install, and support small deployments while retaining the best features of picos, specifically being Internet first. Over the past year we've met that goal and I'm quite excited about where we're at.

Matthew Wright and Bruce Conrad have reimplemented the pico engine in NodeJS. The new engine is easy to install and quite a bit faster than the old engine. We've already got most of the important features of picos. My students have redone large portions of our supporting code to run on the new engine. As a result, the new engine is sufficiently advanced that I'm declaring it ready for use.

We've updated the Quickstart and Pico Programming Lessons to use the new engine. I'm also adding new lessons to help programmers understand the most important features of picos and KRL.

My Large-Scale Distributed Systems class (CS462) is using the new pico engine for their reactive programming assignments this semester. I've got 40 students going through the pico programming lessons as well as reactive programming labs from the course. The new engine is holding up well. I'm planning to expand its use in the course this spring.

Adam Burdett has redone the closet demo we showed at OpenWest last summer using the new engine running on a Raspberry Pi. One of the things I didn't like about using the classic pico engine in this scenario was that it made the solution overly reliant on a cloud-based system (the pico engine) and consequently was not robust under network partitioning. If the goal is to keep my machines cool, I don't want them overheating because my network was down. Now the closet controller can run locally with minimal reliance on the wider Internet.

Bruce was able to use the new engine on a proof of concept for BYU's priority registration. This was a demonstration of the engine's ability to scale and handle thousands of picos. The engine, running on a laptop, was able to handle 44,504 add/drop events in over 8,000 separate picos in 35 minutes and 19 seconds. The throughput was 21 registration events per second, or 47.6 milliseconds per request.

We've had several lunch and learn sessions with developers inside and outside BYU to introduce the new engine and get feedback. I'm quite pleased with the reception and interactions we've had. I'm looking to expand those now that the lessons are completed and we have had several dozen people work them. If you're interested in attending one, let me know.

Up Next

I've hired two new students, Joshua Olson and Connor Grimm, to join Adam Burdett and Nick Angell in my lab. We're planning to spend the summer getting Manifold, our pico-based Internet of Things platform, running on the new engine. This will provide a good opportunity to improve the new pico engine and give us a good IoT system for future experiments, supporting our idea around social things.

I'm also contemplating a course on reactive programming with picos on Udemy or something like it. This would be much more focused on practical aspects of reactive programming than my BYU distributed systems class. Picos are a great way to do reactive programming because they implement an actor model. That's one reason they work so well for the Internet of Things.
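To give a flavor of how programs interact with picos, here is a minimal sketch of raising an event over the Sky Event API. The engine address, the event channel identifier (ECI), and the temperature event are placeholder assumptions, not values from any real installation; substitute your own.

```typescript
// Minimal sketch: raising an event to a pico over the Sky Event API.
// ENGINE and ECI are placeholders -- use values from your own engine.

const ENGINE = "http://localhost:8080";   // assumed local engine address
const ECI = "your-event-channel-id";      // an event channel on some pico

async function raiseEvent(domain: string, type: string, attrs: Record<string, string>) {
  const eid = Date.now().toString();      // event id; any unique string works
  const qs = new URLSearchParams(attrs).toString();
  const url = `${ENGINE}/sky/event/${ECI}/${eid}/${domain}/${type}?${qs}`;
  const res = await fetch(url);
  return res.json();                      // the engine replies with directives
}

// For example, the closet controller might react to a temperature reading:
raiseEvent("temperature", "reading", { temp: "82" }).then(console.log);
```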
Going Further

If you'd like to explore the pico engine and reactive programming with picos, you can start with the Quickstart and move on to the pico programming lessons. We'd also love help with the open source implementation of the pico engine. The code is on Github and there's a well-maintained set of issues that need to be worked. Bruce is the coordinator of these efforts. Any questions on picos or using them can be directed to the Pico Labs forum and there's a pretty good set of documentation. Photo Credit[...]



What Use is a Master of Science in CS?

Thu, 23 Feb 2017 08:56:13 -0700

Summary: Don't botch a hiring situation because you don't understand what your candidate's credentials mean. An advanced degree in Computer Science has little to do with programming, but says a lot about the candidate's ability to solve difficult, open-ended problems.

Recently a friend of mine, Eric Wadsworth, remarked in a Facebook post:

My perspective, in my field (admittedly limited to the tech industry) has shifted. I regularly interview candidates who are applying for engineering positions at my company. Some of them have advanced degrees, and to some of those I give favorable marks. But the degree is not really a big deal, doesn't make much difference. The real question is, "Can this person do the work?" Having more years of training doesn't really seem to help. Tech moves so fast, maybe it is already stale when they graduate. Time would be better spent getting actual experience building real software systems.

I read this, and the comments that followed, with a degree of interest because most of them reflect a gross misunderstanding of what an advanced degree indicates. The assumption appears to be that people who get a BS in Computer Science are learning to program and therefore getting an MS means you're learning more about how to program. I understand why this can be confusing. We don't often hire a plumber when we need a mechanical engineer, but Computer Science and programming are still relatively young and we're still working out exactly what the differences are.

The truth is that CS programs are not really designed to teach people to code, except as a necessary means to learning computer science, which is not merely programming. That's doubly true of a master's degree. There are no courses in a master's program that are specifically designed to teach anyone to program anything. You can learn to code at a thousand web sites. A CS degree includes topics like computational theory and practice, algorithms, database design, operating system design, networking, security, and many others, all presented in a way designed to create well-rounded professionals. The ACM Curriculum Guidelines (PDF) are a good place to see some of the detail in a program-independent way.

Most of what one learns in a Computer Science program has a long shelf life—by design. For example, I design the modules in my Large Scale Distributed Programming class to teach principles that have been important for 30 years and are likely to be important for 30 more. Preventing Byzantine failure, for example, has recently become the latest fad with the emergence of distributed ledgers. I learned about it in 1988. If your interview questions are asking people what they know about the latest JavaScript framework, you're unlikely to distinguish the person with a CS degree from someone who just completed a coding bootcamp.

What does one learn getting an advanced degree in Computer Science? People who've completed a master's degree have proven their ability to find the answers to large, complex, open-ended problems. This effort usually lasts at least six months and is largely self-directed. They have shown that they can explore scientific literature to find answers and synthesize that information into a unique solution. This is very different from, say, looking at Stack Overflow to find the right configuration parameters for a new framework. Often, their work is only part of some larger problem being explored by a team of fellow students under the direction of someone recognized as an expert in a specific field in Computer Science.
If these kinds of skills aren't important to your project, then you're wasting your time and money hiring someone with an advanced degree. As Eric points out, the holder of an MS won't necessarily be a better programmer, especially as measured by your interview questions and tests. And if they are important, you're unlikely to uncover a candidate's abilities in an interview. Luckily, someone else has spent a lot of time and money certifying that the [...]



Verifying Constituency: A Sovrin Use Case

Thu, 16 Feb 2017 15:52:41 -0700

Summary: Verified constituency is a simple but powerful means of creating a collective voice that is difficult to ignore or dismiss.

Recently, my representative held a town hall that didn't go so well. Rep. Chaffetz claims that "the protest crowd included people brought in from outside his district specifically to be disruptive." I'm not here to debate the veracity of that claim, but to make a proposal.

First, let's recognize that members of Congress are more apt to listen when they know they are hearing from constituents. Second, this problem is exacerbated online. They wonder, "Are all the angry tweets coming from voters in my district?" and likely conclude they're not. Britt Blaser's been trying to solve this problem for a while.

Suppose that I had four verified claims in my Sovrin agent:

Address Claim—A claim that I live at a certain address, issued by someone that we can trust not to lie about this (e.g. my bank, utility company, or a third-party address verification service).

Constituency Claim—A claim written by the NewGov Foundation or some other trusted third party, based on the Address Claim, that I'm a constituent of Congressional District 3.

Voter Claim—A claim that says I'm a registered voter. Ideally this would be written by the State of Utah Election Office, but might need to be done by someone like NewGov based on voter rolls for now.

Twitter Claim—A claim that proves I own a particular Twitter handle. Again, this would ideally be written by Twitter, but could be the work of a third party for now.1

Given these claims, Sovrin can be used to create a proof that @windley belongs to a verified voter in Congressional District 3. More generally, the proof shows a given social media account belongs to a constituent who lives in a specific political jurisdiction. Anyone would be able to validate that proof and check the claims that it is based on. This proof doesn't need to disclose anything beyond the Twitter handle and congressional district. No other personally identifying information need be disclosed by the proof.

How would we use this proof? Imagine, for example, a Web site that publishes all tweets that contain the hashtag #UTCD3, but only if they are from Twitter handles that are certified to have come from people who live in Congressional District 3. A more ambitious use case would merge these verifications with the NewGov GEOvoter API to place the tweets on interactive maps to show where hotspots are. Combined with sentiment analysis, the constituency proof could be used to show political sentiment across the country, across the state, or within the local water district.

Sovrin provides a trusted infrastructure for issuing and using the verified claims. NewGov or someone else would provide the reason for trusting the claims written about verified voters. Eventually these claims should be written by the Elections Office or Twitter directly, providing even more trust in the system.

Photo Credit: Jason Chaffetz Town Hall Meeting in American Fork, Utah on August 10, 2011 from Michael Jolley (CC BY 2.0). This post originated in a conversation I had with Britt Blaser.

Notes: Twitter is simply one example. We could have claims for Facebook, Instagram, other social media, email, or any online tool.
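For illustration only, here is a hypothetical sketch of what such a proof might carry and how the #UTCD3 site could use it. The field names and the stubbed verification are made up for this sketch and are not Sovrin's actual proof format.

```typescript
// Hypothetical shape of the constituency proof -- an illustration of
// what gets disclosed, not Sovrin's actual format.

interface ConstituencyProof {
  disclosed: {
    twitterHandle: string;   // e.g. "@windley"
    district: string;        // e.g. "UT-CD3"
  };
  claimRefs: string[];       // references to the four underlying claims
  proof: string;             // cryptographic material tying it all together
}

// A #UTCD3 aggregator would accept a tweet only when a valid proof binds
// its handle to the district (real verification stubbed out here).
function acceptTweet(handle: string, p: ConstituencyProof): boolean {
  const proofChecks = true;  // stand-in for cryptographic verification
  return proofChecks &&
         p.disclosed.twitterHandle === handle &&
         p.disclosed.district === "UT-CD3";
}
```

Note that nothing in the disclosed portion names the voter or their address; that is the point of the scheme.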



Student Profiles: A Proof of Concept

Sun, 12 Feb 2017 12:46:00 -0700

Summary: This post describes a proof of concept for a personal learning system called a student profile. The student profile gives students control over their personal information, including learning activities, and demonstrates how other parties can trust learning records kept in the student profile and shared by the student. This is a critical factor in creating personal learning environments that support life-long learning and give the university greater flexibility in system architecture.

In Sovrin Use Cases: Education, I broadly outlined how a decentralized identity ledger, Sovrin, could provide the tools necessary to build a decentralized university. This post takes the next step by laying out a three-phase project to provide a proof of concept.

Background

Teaching students to be life-long learners in a digital age includes giving them tools and techniques they can use after they leave the university. We do students a disservice when we supply them with only enterprise-level tools that they lose access to once they've graduated. Learning management systems are fine, but they're not a personal tool that supports life-long learning. BYU has been exploring personal learning environments and operates a thriving Domain of One's Own program in support of this ideal.

Architected correctly, personal learning environments provide additional, important benefits. For example, we're exploring the use of decentralized personal student profiles to create a virtual university using programs, certifications, and courses from several different institutions.

A Proof of Concept

In Sovrin Use Cases: Education, I wrote:

The idea is to create a decentralized system of student profiles that contain both profile data as well as learning records. These learning records can attest to any kind of learning activity from the student having read a few paragraphs to the completion of her degree. By making the student the point of integration, we avoid having to do heavy, expensive point-to-point integrations between all the student information systems participating in the educational initiative.

The architecture relies on being able to write verifiable claims about learning activities and other student attributes. A verifiable claim allows the student to be the source of information about themselves that other parties can trust. Without this property, the decentralized approach doesn't work. I describe the details here: A Universal Trust Framework.

The proof of concept has three phases that are described below. When finished, we will have a prototype system that demonstrates all the technology and interactions required to put this architecture into use. The things we learn in the proof of concept will guide us as we roll the architecture out globally.

Phase 0: A Learning Record Store

The goal of Phase 0 is to build a basic student profile that has an API manager, learning record store, and some system for storing basic profile information.

Basic Student Profile

The system produced in Phase 0 is foundational. The student profile and learning record store (I'll use "student profile" to inclusively talk about the entire system from now on) provide the repository of student data. The student profile provides an API that supports event-based notification (mostly through the xAPI). The requirements for the system built in Phase 0 include the following:

API Manager—the student profile will include a simple API manager.

Profile data—the student profile will be capable of storing and providing (through an API) basic profile data.

Learning record store—the student profile will include an xAPI-compatible LRS.

xAPI notifications from Canvas—the student profile should accept xAPI calls from the University's test instance of Canvas (a sketch of such a call follows below) and make, as necessary, University API calls to other campus systems.

Permissioned Access—The student profile[...]
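For a flavor of those xAPI calls, here is a sketch of posting a single statement to the profile's LRS. The endpoint, credentials, and activity IDs are hypothetical placeholders; the statement shape and version header follow the xAPI spec.

```typescript
// Sketch: posting an xAPI statement to the student profile's LRS.
// LRS URL, credentials, and activity IDs below are made-up placeholders.

const LRS = "https://profile.example.edu/xapi";  // hypothetical LRS endpoint

const statement = {
  actor:  { mbox: "mailto:student@example.edu", name: "A. Student" },
  verb:   { id: "http://adlnet.gov/expapi/verbs/completed",
            display: { "en-US": "completed" } },
  object: { id: "https://canvas.example.edu/courses/cs462/modules/1",
            definition: { name: { "en-US": "Reactive Programming, Module 1" } } },
};

async function record() {
  const res = await fetch(`${LRS}/statements`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",   // required on every xAPI request
      Authorization: "Basic " + Buffer.from("key:secret").toString("base64"),
    },
    body: JSON.stringify(statement),
  });
  console.log(res.status);                   // 200 with statement IDs on success
}

record();
```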



A Universal Trust Framework

Mon, 30 Jan 2017 17:17:05 -0700

Summary: The Internet has never had a universal trust framework before. Imagine if you could build the next sharing economy application without having to also build the platform that helps people trust. This post describes a universal trust framework that is open to all. Sovrin changes the world by providing a universal means of trusting.

In We've stopped trusting institutions and started trusting strangers, Rachel Botsman talks about the "trust gap" that separates a place of certainty from something that is unknown. Some force has to help us "make the leap" from certainty to uncertainty, and that force is trust.

Traditionally, we've relied on local trust that is based on knowing someone—acquired knowledge or reputation. In a village, I know the participants and can rely on personal knowledge to determine who to trust. As society got larger, we began to rely on institutions to broker trust. Banks, for example, provide institutional trust by brokering transactions—we rely on the bank to transfer trust. I don't need to know you to take your credit card. But lately, as Botsman says, "we've learned that institutional trust isn't meant for the digital age." In the digital age, we have to learn how to trust strangers.

Botsman discusses sharing platforms like AirBnB and BlaBlaCar. You might argue that they're just another kind of institution. But there's a key difference: those platforms are bidirectional. For example, AirBnB lets guests rate their hosts, but also lets hosts rate guests. These platforms give more information about the individual in order to establish trust.

But beyond platforms like AirBnB lies distributed trust based on blockchains and distributed ledgers. Botsman makes the point that distributed trust provides a system wherein you don't have to trust the individual, only the idea (e.g. distributed cash transactions) and the platform (e.g. Bitcoin). You can do this when the system itself makes it difficult for the person to misrepresent themselves or their actions. When I send you Bitcoin, you don't have to trust me because the system provides provenance of the transaction and ensures that it happens correctly. I simply can't cheat.

At a fundamental level, trust leads us to believe what people say. Online this is difficult because identifiers lack the surrounding trustworthy context necessary to provide the clues we need to establish trust. Dick Hardt said this back in 2007. The best way to create context around an identifier is to bind it to other information in a trustworthy way. Keybase does this, for example, to create context for a public key. Keybase creates a verifiable context for a public key by allowing its owner to cryptographically prove she is also in control of certain domain names, Twitter accounts, and Github accounts, among others. Keybase binds those proofs—the context—to the key. Potential users of the public key can validate for themselves the cryptographic proofs and decide whether or not to trust that the public key belongs to the person they wish to communicate with.

Another key idea in reputation and trust is reciprocity. Accountability and a means of recourse when something goes wrong create an environment where trust can thrive. This is one of the secrets to sharing economy platforms like AirBnB. Botsman makes the point that she never leaves the towel on the floor of an AirBnB because the host "knows" her. She is accountable and there is the possibility for recourse (a bad guest rating).
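Reduced to essentials, a Keybase-style proof is just a statement about an account, signed by the key itself, that anyone can check. Here is a minimal sketch of that check using Node's built-in crypto; the account binding is made up for illustration, and real Keybase proofs are also posted publicly on the claimed service.

```typescript
// Minimal sketch of a Keybase-style proof: sign a claim with the key,
// let anyone verify it. Uses Ed25519 from node:crypto.

import { generateKeyPairSync, sign, verify } from "node:crypto";

// The key whose context we're establishing.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The owner signs a statement binding the key to an account (made up here).
const statement = Buffer.from(
  JSON.stringify({ service: "twitter", username: "alice" })
);
const proof = sign(null, statement, privateKey);

// Anyone holding the public key can check the binding for themselves.
console.log(verify(null, statement, publicKey, proof)); // true
```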
Trust Frameworks and Trust Transactions

The phrase we use to describe the platforms of AirBnB, BlaBlaCar, and other sharing economy companies is trust framework. Simply put, a trust framework provides the structure necessary to leap between the known and unknown. For example, social login presents a trust leap for the Web sites relying on the social media site that's authenticating the user. When a user [...]



Every Computer is Distributed

Wed, 25 Jan 2017 14:35:53 -0700

Summary: My experience with a new LG UltraFine 5K monitor reminded me that modern personal computer systems are collections of computers.


I have one of the new 13" Macbook Pros with four USB C ports. I also have one of the new LG Ultrafine 5K monitors to use with it. Ever since I got the monitor, I've had all kinds of problems. The display will work for 10 minutes and then start resetting (acting like no display is connected, then reconnecting, then repeating) until the laptop gives up, freezes, and eventually restarts. I was hopeful the latest MacOS release might fix it, but it just kept on acting up, even after installing the latest.

I was about to give up on the monitor and just go back to my old reliable 27" Thunderbolt display. I tried disconnecting everything, rebooting the computer, using different ports...basically everything I could think of. Then I remembered one computer I hadn't rebooted: the monitor. Sure enough, unplugging the monitor from power so it was forced to reboot solved the problem.

My theory is that the monitor had gotten itself into a weird state that merely disconnecting it from the computer couldn't reset. The monitor is one of the computers in a distributed system. There are others, of course. We're already used to things like printers being computers in their own right because they have a network connection. But the monitor, keyboard, mouse, and even cables all have computers in them that have to cooperate to provide the personal computing experience.

So, next time your computer isn't working, you might just need to reboot one of the cables.


As an aside, the LG UltraFine 5K is an incredible monitor: bright and clear. But it's not much of a hub, because it has just three spare USB-C ports, so you still have to have all kinds of dongles. What's more, the 13" MacBook Pro can only drive one of them. And even with a 15" MacBook Pro you can't daisy-chain the monitors—not enough bandwidth.





Using Picos for BYU's Priority Registration

Wed, 04 Jan 2017 14:57:27 -0700

Summary: Picos are a natural way to build microservices. This post presents the results of an experiment we ran to see how the new pico engine performs when placed under surge loading that simulates BYU's priority registration.

Every semester over 30,000 BYU students register for two to six sections out of about 6,000 sections being offered. During the priority registration period this results in a surge as students try to register for the classes they need. One proposed solution for handling the surge is to create a microservice representing sections that can spawn copies of itself as needed. Thinking reactively, I concluded that we might want to just have an independent actor to represent each section. Actors are easy to create and the underlying system handles the scaling for you.

Coincidentally, I happen to have an actor-style programming system. Picos are an actor-model programming system that supports reactive programming. As such they are great for building microservices. Bruce Conrad and Matthew Wright are reimplementing the KRE pico platform using NodeJS. We call the new system the "pico engine." I suggested to Bruce that this class registration problem would be a fun way to test the capabilities of the new pico engine. What follows is the results of Bruce's experiment.

Experimental Setup

We were able to get the add and drop requests for the first two days of priority registration for Fall 2016. This data included 6,675 students making 44,505 registration requests against 5,393 sections.

The root actor in a pico system is called the Owner Pico. The Owner Pico is the ancestor of all other picos. For this experiment, Bruce created two child picos for the Owner Pico to represent the collection of sections and the collection of students. The Section Collection Pico is the parent of a pico for each section. The Student Collection Pico is the parent of a pico for each student. The resulting pico system looked like the following diagram from the pico engine's developer console:

Overall Setup for Registration System

Bruce chose not to create picos for each student for this experiment. He programmatically created picos for each section. The following figure shows the resulting pico graph:

Picos for Every Section Taught in Fall 2016

Each section pico has a unique identity and knows things like its name, section id, and seating capacity. The pico also keeps a roster of students who have registered for the course. The picos instantiated themselves with this data by calling a microservice with the data.

Using this configuration, Bruce replayed the add/drop requests as events to the pico engine. Events were replayed by sending a section:add or section:drop event to the Section Collection Pico (which routed requests to the individual section picos). After the requests have been replayed, we can query individual section picos to ensure that the results are as expected. This figure shows the JSON result of querying the section pico for STDEV 150-3. Queries were done programmatically to verify all results.

Querying a section pico shows it has the right state

Results

Bruce conducted two tests using a pico engine running on a MacBook Pro (2.6 GHz Intel Core i5). The first replay, done with per-event timing enabled, required just under an hour for the 44,504 requests. The average time per request was about 80 milliseconds (of which approximately 30 milliseconds was overhead relating to measuring the time).

A second replay, done with the per-event timing disabled, processed the same 44,504 add/drops in 35 minutes and 19 seconds. The throughput was 21 registration events per second, or 47.6 milliseconds per request. For reference, the legacy system, running on several la[...]
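Conceptually, the replay driver is just a loop that raises one Sky event per recorded request. Here is a sketch of its shape; the engine address, the collection ECI, and the attribute names are placeholder assumptions, not the code Bruce actually ran.

```typescript
// Sketch of a replay driver: one section:add or section:drop event per
// recorded request. ENGINE and the collection ECI are placeholders.

const ENGINE = "http://localhost:8080";
const SECTION_COLLECTION_ECI = "section-collection-eci";

type Req = { action: "add" | "drop"; studentId: string; sectionId: string };

async function replay(requests: Req[]) {
  const start = Date.now();
  for (const [i, r] of requests.entries()) {
    const attrs = new URLSearchParams({ student_id: r.studentId, section_id: r.sectionId });
    await fetch(`${ENGINE}/sky/event/${SECTION_COLLECTION_ECI}/${i}` +
                `/section/${r.action}?${attrs}`);   // routed on to the section pico
  }
  const secs = (Date.now() - start) / 1000;
  console.log(`${requests.length} events in ${secs.toFixed(0)}s, ` +
              `${(requests.length / secs).toFixed(1)} events/sec`);
}
```

At the reported numbers, 44,504 events in 2,119 seconds works out to the 21 events per second, or 47.6 milliseconds per request, quoted above.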



Sovrin Use Cases: GPG as a Sovrin Client

Thu, 16 Feb 2017 15:53:03 -0700

Summary: GPG would make an excellent client for the Sovrin identity network and solve some of the problems that have prevented PGP from becoming a useful communication system.

Lately, I've been thinking a lot about use cases for self-sovereign identity. This series of blog posts discusses how Sovrin can be used in different industries. In this article I discuss Sovrin and GPG.

As I read I'm throwing in the towel on PGP, and I work in security from Filippo Valsorda, I couldn't help but think that it illustrates a real problem that Sovrin solves. Valsorda says:

But the real issues, I realized, are more subtle. I never felt confident in the security of my long-term keys. The more time passed, the more I would feel uneasy about any specific key. Yubikeys would get exposed to hotel rooms. Offline keys would sit in a far away drawer or safe. Vulnerabilities would be announced. USB devices would get plugged in. A long-term key is as secure as the minimum common denominator of your security practices over its lifetime. It's the weak link.

To see how Sovrin can help, let's talk about DIDs and DDOs.

DIDs and DDOs

A distributed ledger like Sovrin is a key-value store with entries that are temporally ordered and immutable. Decentralized Identifiers (DIDs) are intended to be one kind of key for a ledger. A DID is similar to a URN (universal resource name) and has a similar syntax. Here's an example DID:

did:sov:21tDAKCERh95uGgKbJNHYp

The ternary structure includes a schema identifier (did), a namespace identifier (sov in this case), and an identifier within that namespace (21tDAKCERh95uGgKbJNHYp), separated by colons. DIDs are meant to be permanently assigned and not reused. DIDs can be used to identify anything: person, organization, place, thing, or even concept.

DIDs point to DDOs, or DID Descriptor Objects. A DDO is a JSON-LD-formatted data structure that links the DID to public keys, signatures, and service endpoints. We can use the signatures to validate that the DDO has not been tampered with, the service endpoint to discover more information about the DID, and the public key to secure communication with the entity identified by the DID.

Sovrin is designed to support a different key pair for each DID. Consequently, a DID represents an identifier that can be specific to a particular relationship. Say I give 20 DIDs to friends I want to communicate with. Each associated DDO would contain a public key that I generated just for that relationship. I would sign these with a key that is well known and associated with social media and other well known online personas I maintain.1 The keys associated with these DIDs can be rotated as frequently as necessary since people never store my key, they only store the DID I give them for communicating with me. The ledger ensures that the most recent public key for a given DID can always be validated by checking the signature against the key associated with my well known DID. Of course, I've also stored DIDs for my friends and can check communications from them in the same way.

These features, taken together, do away with the need for long-term keys and ease the burden of knowing the public key for someone you want to communicate with. So long as you know the DID for the person, you can encrypt messages they can read.2

A Proposal: GPG and Sovrin

Which brings me to my proposal. Sometime in the first part of 2017, the Sovrin identity network will go live. The sandbox is available now. Many of the most advanced features of Sovrin will not be available in the MVP, but DIDs, DDOs, and public-private key pairs will be. Could GPG be modified to perform as a Sovrin client? I believe the following would be required:

Create DIDs with valid public keys on the Sovrin ledger3

Store and[...]
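Mechanically, a DID splits at the colons into the three parts described above. A trivial illustration:

```typescript
// Sketch: splitting a DID into the three parts described above.

function parseDid(did: string) {
  const [schema, namespace, id] = did.split(":");
  if (schema !== "did" || !namespace || !id) {
    throw new Error(`not a valid DID: ${did}`);
  }
  return { schema, namespace, id };
}

parseDid("did:sov:21tDAKCERh95uGgKbJNHYp");
// => { schema: "did", namespace: "sov", id: "21tDAKCERh95uGgKbJNHYp" }
```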



Sovrin Use Cases: Portable Picos

Wed, 28 Dec 2016 17:57:52 -0700

Summary: This article describes a method for using the Sovrin distributed identity ledger to look up picos by name rather than location. This allows picos to be portable between hosting engines without loss of functionality.

Lately, I've been thinking a lot about use cases for self-sovereign identity. This series of blog posts discusses how Sovrin can be used in different industries. In this article I discuss Sovrin and picos.

Anyone who reads this blog regularly knows that I've been working on a system for creating online agents using an event-driven, actor-model programming system called "persistent compute objects" or "picos" for short. Picos support a personal, decentralized model for the Internet of Things. Fuse, a connected-car platform, was designed to show how this model works.

Picos support a hosting model that supports the same properties that the Internet itself has, namely decentralization, heterarchy, and interoperability. Central to these features is the idea of portability—a pico should be movable from one hosting platform (something we call a "pico engine") to another without loss of functionality. I believe that the future Internet of Things will require architectures where online "device shadows" (as Amazon calls them) are not only under the control of the owner, but hostable in a variety of places, not just on the manufacturer's servers. For example, I might buy a toaster that has a device shadow hosted by Black and Decker and later decide to move that device shadow (along with its interaction history) to a hosting platform I control. Picos are a way of imagining portable device shadows that are under owner control.

Names

The most difficult problem standing in the way of easily moving picos between hosting engines is that picos receive messages using a URL that follows the format defined by the Sky Event Protocol. Because it's a URL, it's a location. Moving a pico from one engine to another requires updating all the places that refer to that pico, an impossible task. HTTP redirects could potentially be used, but they have scaling limitations I'd rather stay away from. Specifically, frequent moves might create long chains of redirects that have to be maintained, since you can never be sure when every service with an interest in the pico has seen the redirect and updated its records.

The more general solution is to have picos refer to each other by name and have some way of resolving a name to the current location of the pico. In the past this always led me to an uncomfortable reliance on some kind of centralized registry, but recent developments in distributed ledgers have made name resolution possible without reliance on a central registry.

DIDs and DDOs

A distributed ledger like Sovrin is a key-value store with entries that are temporally ordered and immutable. Decentralized Identifiers (DIDs) are intended to be one kind of key for a ledger. A DID is similar to a URN (universal resource name) and has a similar syntax. Here's an example DID:

did:sov:21tDAKCERh95uGgKbJNHYp

The ternary structure includes a schema identifier (did), a namespace identifier (sov in this case), and an identifier within that namespace (21tDAKCERh95uGgKbJNHYp), separated by colons. DIDs are meant to be permanently assigned and not reused. DIDs can be used to identify anything: person, organization, place, thing, or even concept.

DIDs point to DDOs, or DID Descriptor Objects. A DDO is a JSON-LD-formatted data structure that links the DID to public keys, signatures, and service endpoints. We can use the signatures to validate that the DDO has not been tampered with, the service endpoint to discover more information about the DID, and the public key to secure communication with the entity identified by the DID.

Using Sovrin with Picos

D[...]
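Here is a sketch of what name-based addressing could look like for picos under these assumptions. The resolver, the DDO shape, and the endpoint values are simplified stand-ins for illustration, not Sovrin's actual API.

```typescript
// Sketch: name-based addressing for picos. A sender resolves the pico's
// DID to its DDO and reads the *current* service endpoint, so the pico
// can move between engines without breaking anyone.

interface Ddo {
  serviceEndpoint: string;   // where the pico's engine lives right now
  publicKey: string;         // key for securing communication with the pico
}

// Stand-in for a ledger lookup; a real client would query Sovrin here.
async function resolveDid(did: string): Promise<Ddo> {
  return {
    serviceEndpoint: "http://engine-b.example.com:8080/sky/event/some-eci",
    publicKey: "base58-public-key",
  };
}

async function sendToPico(did: string, domain: string, type: string) {
  const ddo = await resolveDid(did);      // fresh lookup on every send
  return fetch(`${ddo.serviceEndpoint}/eid-1/${domain}/${type}`);
}

// Moving the pico means updating the DDO on the ledger -- no redirect
// chains, and no senders to notify.
sendToPico("did:sov:21tDAKCERh95uGgKbJNHYp", "temperature", "reading");
```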



Sovrin Use Cases: Education

Wed, 07 Dec 2016 19:51:41 -0700

Summary: Sovrin's verifiable claims provide the means of creating a virtual university with little or no traditional integration between the various players.

Lately, I've been thinking a lot about use cases for self-sovereign identity. This series of blog posts discusses how Sovrin can be used in different industries. In this article I discuss Sovrin and education.

Last spring I wrote about how deconstructing the student information system at BYU is allowing us to create a virtual university. The idea is to create a decentralized system of student profiles that contain both profile data as well as learning records. These learning records can attest to any kind of learning activity from the student having read a few paragraphs to the completion of her degree. By making the student the point of integration, we avoid having to do heavy, expensive point-to-point integrations between all the student information systems participating in the educational initiative.

This diagram shows a number of players in the virtual university (VU) system (labeled CES Initiative). They have no direct connections to each other, but present their own APIs, as does the student profile. As students interact with the various institutions, they present data in their profile (using an API). That's the only point of integration for these institutions, including VU itself.

One of the design requirements for the student profile is that it be hostable wherever the student desires. We also put the student in control of authorizations to use data. These features mitigate regulatory requirements around privacy of student data. With millions of potential students spread among 188 different countries, this saves significant cost and headache trying to meet various regulatory hurdles. When the student is an active participant in sharing their data, the hurdles can be more easily overcome.

The Role of Self-Sovereign Identity

Self-sovereign identity can simplify many of the most difficult challenges of building the student profile. One of the key features of Sovrin is support for verifiable claims shared by their subject. Nearly everything in the student profile looks like a verifiable claim. Sovrin provides the infrastructure for issuing claims and using them in disclosures. Sovrin enables the receiver of the disclosure to verify the claims in it. Verifiable claims also significantly ease the integration burden between the various systems involved in the virtual university.

Say Alice wants to be a bookkeeper, but the bookkeeping microcourse has requirements for English language proficiency and secondary school completion, which she lacks. There could be four institutions involved in this scenario: Virtual University (VU), the English language certifier (ELC), the secondary education provider (SEP), and the college offering bookkeeping (BK). These last three institutions all have agreements with VU, but no technical integration with VU or each other.

Alice has been admitted to VU and paid for the bookkeeping microcourse along with the required prerequisites.1 She goes to ELC. Alice can prove she's a VU student and has paid for the English language certification by disclosing that information from the verifiable claims that VU wrote to her student profile. ELC can validate that the disclosure is true without talking to VU directly. Alice registers for a course and, after she completes it, ELC writes a claim that she passed to her student profile.

Alice, guided by the Student Profile, heads to SEP and discloses that she's a VU student. SEP validates her student status and that she's paid for the SEP program. They ask her to prove that she has the required level of English competency and she provides a disclosure based on the claim that ELC wrote. She completes the secondary educa[...]
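For illustration only, here is a hypothetical shape for the claims flowing through Alice's scenario. The interface, DIDs, and attribute names are made up for this sketch; Sovrin's actual claim format will differ.

```typescript
// Hypothetical shapes for the claims in Alice's scenario -- an
// illustration of the flow, not Sovrin's actual claim format.

interface Claim {
  issuer: string;                                // DID of VU, ELC, SEP, or BK
  subject: string;                               // Alice's DID
  attributes: Record<string, string | boolean>;
  signature: string;                             // verifiable via the ledger
}

// 1. VU writes a claim to Alice's profile when she enrolls and pays.
const vuClaim: Claim = {
  issuer: "did:sov:VU-did",
  subject: "did:sov:Alice-did",
  attributes: { student: true, paidFor: "english-certification" },
  signature: "VU-signature",
};

// 2. ELC checks VU's signature against the ledger -- no direct
//    integration with VU -- and, when Alice passes, writes its own claim:
const elcClaim: Claim = {
  issuer: "did:sov:ELC-did",
  subject: "did:sov:Alice-did",
  attributes: { englishProficiency: true },
  signature: "ELC-signature",
};
// SEP later verifies elcClaim the same way, again with the ledger as the
// only shared infrastructure between the institutions.
```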