Phil Windley's Technometria

Building the Internet of My Things

Last Build Date: Thu, 27 Jul 2017 19:12:28 -0600

Copyright: Copyright 2017

A Mesh for Picos

Wed, 26 Jul 2017 13:20:50 -0600

Summary: This post describes some changes we're making to the pico engine to better support a decentralized mesh for running picos. Picos are Internet-first actors that are well suited for building decentralized solutions on the Internet of Things. Over the last year, we've been replacing KRE, the engine picos run on, with a new, Node-based engine that is smaller and more flexible. If you're unfamiliar with picos, here are a few resources for exploring the idea and how picos enable a decentralized IoT:

  • Picos: Persistent Compute Objects—This brief introduction to picos and the components that make up the pico ecosystem is designed to make clear the high-level concepts necessary for understanding picos and how they are programmed.
  • Reactive Programming with Picos—This is an introduction to picos as a method for doing reactive programming. The article contains many links to other, more detailed articles about specific topics. You can thus consider this an index to a detailed look at how picos work and how they can be programmed.
  • Promises and Communities of Things—Promise theory provides a tool for thinking about and structuring the code that implements communities in groups of social things. This blog post discusses some initial thinking about promises and picos.
  • Social Things, Trustworthy Spaces, and the Internet of Things—Social things interacting in trustworthy spaces represent a model for an Internet of Things that is scalable to trillions of devices and still works. This post describes that concept and proposes picos as a platform for building social things.
  • Market intelligence that flows both ways—Doc Searls talks about how pico-based shadows of products are useful in creating new customer service interaction patterns. The pico-based system he references, SquareTag, is no longer in active development, but we're replacing it with a new system called Manifold that reflects the community of things ideas I reference above.
The New Pico Engine

Picos run on an engine that provides the support they need for Internet services, persistent storage, and identity. Over the past year, we've been rebuilding the pico engine. The Classic Pico Engine was a big cloud service; the New Pico Engine is a small, nimble Node program. We recently built two demos to show this, both using the pico engine running on a Raspberry Pi in IoT scenarios. The first was a demo of a computer closet (here's a writeup of an early version of that demo) that uses temperature sensors and several fans to show how picos can cooperate to achieve some goal (in this case, keeping the closet sufficiently cool). The second was an overengineered Jack-in-the-Box that had a pico engine running internally to play music and spring the door.

A Mesh for Picos

Picos operate in a mesh, regardless of where they're hosted. Picos don't have to be running on the same pico engine in order to form relationships with each other and interoperate. We've begun a program to more fully support this ideal by making the engine itself part of a larger mesh of engines, each capable of hosting any of the picos in that ecosystem. The vision is that someone using a pico doesn't need to know what engine it's hosted on and that picos can move easily from engine to engine, moving computation close to where it's needed. There are two problems to solve in order to make this a reality:

  • Picos are addressed by URL, so the pico engine's host name becomes part of the pico's address.
  • Picos have a persistence layer that is currently provided by the engine the pico is hosted on.

We propose to solve the first problem using the Sovrin identity platform. I wrote about that idea in some detail in Sovrin Use Cases: Portable Picos.
The synopsis is that Sovrin allows us to find picos by name rather than location using the distributed ledger as a storage location for names based on decentralized identifiers (DIDs) that point to the current location of[...]
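Finding picos by name rather than location can be sketched with a toy resolver. This is a Python illustration only, assuming nothing about Sovrin's actual API: the ledger is just a dict, and the DID and endpoint values are made up.

```python
# Hypothetical sketch of name-based pico addressing. The "ledger" maps a
# decentralized identifier (DID) to the pico's current event channel URL.
# All names and URLs here are illustrative, not Sovrin's or the engine's.

ledger = {
    "did:sov:pico123": "https://engine-a.example.com/sky/event/pico123",
}

def resolve_pico(did):
    """Look up a pico's current endpoint from the (mock) ledger."""
    return ledger[did]

def migrate_pico(did, new_endpoint):
    """Moving a pico to a new engine just updates the ledger entry;
    anyone who addresses the pico by DID is unaffected."""
    ledger[did] = new_endpoint

migrate_pico("did:sov:pico123", "https://engine-b.example.com/sky/event/pico123")
print(resolve_pico("did:sov:pico123"))  # resolves to the new engine
```

The point of the sketch: callers hold a stable name, and only the ledger entry changes when computation moves between engines.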

Sovrin Status: Provisional Trust Framework

Sat, 08 Jul 2017 09:56:49 -0600

Summary: The Sovrin Trust Framework is nothing less than the constitution of the Sovrin Network. The Provisional Trust Framework was recently completed and approved. This post gives details about what the Sovrin Trust Framework is and why it's so important.

Last week, the Sovrin Foundation Board of Trustees voted unanimously to approve the Sovrin Provisional Trust Framework (PTF). Trust is the primary benefit of Sovrin. I've written before about Sovrin as a universal trust framework and Sovrin's web of trust model. The Provisional Trust Framework is a set of legal documents that provide the foundation for Sovrin's trust model.

Sovrin is a permissioned ledger, meaning it achieves consensus using a known set of validators—in this case, trusted institutions around the world. This is in contrast to permissionless ledgers like Bitcoin's blockchain that rely on validators who are not identified. The permissioned model has advantages in both speed and cost of transactions. But it means that Sovrin needs governance. The PTF is the set of documents that spells out how various participants in the network must behave and the agreements that Sovrin Stewards make in order to operate a validator node. Each Steward will sign the Sovrin Founding Steward Agreement before they operate a validator node on the Sovrin network.

Sovrin is also a public ledger, meaning anyone can use it. This is in contrast to other permissioned ledgers that are operated as private systems for their owner's specific purposes. Sovrin is designed to operate as a global public utility for identity. Sovrin can be used by anyone. The PTF supports this goal, enumerating the participants in the network and their obligations and qualifications. The PTF also spells out how Sovrin's Web of Trust model works.
The PTF outlines:

  • the principles that govern the Sovrin network and, by extension, the Sovrin Foundation and Stewards
  • key definitions and terminology
  • the obligations of the Sovrin Foundation
  • business policies for identity owners, stewards, trust anchors, and guardians
  • legal policies for identity owners, stewards, agencies, and developers
  • legal policies for dispute resolution
  • legal policies for confidentiality
  • technical policies for node operation, monitoring and reporting, write permissions, transaction limitations, and service levels
  • policies for amending the PTF

The PTF governs the operation of the Sovrin network while in "provisional" status, meaning during its first months of official operation when it is still being “battle-tested” in live operation by the Founding Stewards. The PTF has a number of additional sections that must be completed, approved, and agreed to before the Sovrin network is declared ready for general availability. At that point these documents will graduate and be called the Sovrin Trust Framework.

The completion of the PTF is an important step for the network. The Trust Framework Working Group and its chair, Drummond Reed, have spent long hours hammering out this vital set of documents. With the PTF's completion and approval, Sovrin takes one more important step to being a reality.

Sovrin Status: Alpha Network Is Live

Fri, 23 Jun 2017 18:39:00 -0600

Summary: The Sovrin Network is live and undergoing testing. This Alpha Stage will allow us to ensure the network is stable and the distributed nodes function as planned.


Sovrin is based on a permissioned distributed ledger. Permissioned means that there are known validators that achieve consensus on the distributed ledger. The validators are configured so as to achieve Byzantine fault tolerance, but because they are known, the network doesn't have to deal with Sybil attacks. This has several implications:

  1. The nodes are individually unable to commit transactions, but collectively they work together to create a single record of truth. Individual nodes are run by organizations called "Sovrin Stewards."
  2. Someone or something has to choose and govern the Stewards. In the case of Sovrin, that is the Sovrin Foundation. The nodes are governed according to the Sovrin Trust Framework.
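The fault-tolerance arithmetic behind such a validator pool can be sketched as follows. The n ≥ 3f + 1 bound is the standard Byzantine fault tolerance requirement, not a number from this post:

```python
# How many Byzantine-faulty validators a BFT protocol can tolerate.
# Standard result: with n nodes, safety requires n >= 3f + 1, so the
# largest tolerable f is (n - 1) // 3.

def max_faulty(n):
    """Largest number of arbitrarily-faulty validators tolerable
    among n nodes under the n >= 3f + 1 requirement."""
    return (n - 1) // 3

# With the nine steward nodes mentioned later in this post, the network
# could tolerate two faulty validators:
print(max_faulty(9))  # 2
```

This is why a permissioned pool of known institutions can be small and still produce a single trustworthy record: the bound is on node count, not on open participation.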

The Sovrin Network has launched in alpha. The purpose of the Alpha Network is to allow Founding Stewards to do everything necessary to install and test their validator nodes before we collectively launch the Provisional Network. It’s our chance to do a dry-run to work out any kinks that we may find before the initial launch.

Here’s what we want to accomplish as part of this test run:

  • Verify technical readiness of the validator nodes
  • Verify security protocols and procedures for the network
  • Test emergency response protocols and procedures
  • Test the distributed, coordinated upgrade of the network
  • Get some experience running the network as a community
  • Work out any kinks and bugs we may find

With these steps complete, Sovrin will become a technical reality. It’s an exciting step. We currently have nine stewards running validator nodes and expect more to come online over the next few weeks. Because the Alpha Network is for conducting tests, we anticipate that the genesis blocks on the ledger will be reset once the testing is complete.

Once the Alpha Network has achieved its goals, it will transition to the Provisional Network. The Sovrin Technical Governance Board (TGB) chose to operate the network in a provisional stage as a beta period where all transactions are real and permanent, but still operating under a limited load. This will enable the development team and Founding Stewards to do performance, load, and security testing against a live network before the Board of Trustees declares it generally available.

After many months of planning and working for the network to go live, we're finally on our way. Congratulations and gratitude to the team at Evernym doing the heavy lifting, the Founding Stewards who are leading the way, and the many volunteers who sacrifice their time to build a new reality for online identity.

Photo Credit: Sunrise from dannymoore1973 (CC0 Public Domain)


Correlation Identifiers

Tue, 06 Jun 2017 15:58:09 -0600

Summary: Correlation identifiers are one of the ideas we talk about in my Distributed Systems class during the Reactive Programming module.

Let's talk for a bit about correlation identifiers. Correlation IDs are used to identify the different parts of a computation when they happen concurrently. Picos are single threaded, so you never have to worry about this in a single evaluation cycle (response to one event) in a given pico. But as soon as there is more than one actor involved in a computation, you run the risk that things might not happen in order and parts of your computation will come back differently than you expect.

For example, in Fuse, creating a fleet report is initiated by an event to the fleet pico. This, in turn, causes the fleet pico to send events to the vehicle picos asking them to do part of the computation. The vehicle picos respond asynchronously, and the fleet pico combines the results to create the completed report. If the initiating event is received twice in quick succession, the vehicle picos will get multiple requests. It's possible (due to network delays, for example) that the responses will come back out of order. This picture shows the situation:

Asynchronous responses can lead to out-of-order reports (click to enlarge)

The response pattern on the left is what we want, but the one on the right is one of many possible combinations that could occur in practice. The colors of the two events and the events they spawn (red or blue) are a pictorial correlation identifier. We can easily save the various responses and then combine them into the right report later if we know all the red responses go together and all the blue responses go together.

To make correlation IDs work in KRL, you wouldn't use colors, of course, but rather a random string for each initiating event. So long as all the participants pass this ID along as an event attribute in the various computations that occur, the fleet pico will be able to use the right pieces to create each report.
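The same scatter-gather pattern can be sketched in Python rather than KRL; the function names, vehicle names, and event shapes here are illustrative, not Fuse's actual code:

```python
import uuid
from collections import defaultdict

# Sketch of the correlation-ID pattern: each report run gets a random id,
# and every response carries that id back so partial results never mix.

pending = defaultdict(dict)   # correlation id -> {vehicle: partial result}
FLEET = ["car1", "car2", "car3"]

def start_report():
    """Initiating event handler: mint a fresh correlation id."""
    return uuid.uuid4().hex   # rides along as an event attribute in picos

def vehicle_response(cid, vehicle, data):
    """A vehicle's response, tagged with the correlation id it was given."""
    pending[cid][vehicle] = data
    if set(pending[cid]) == set(FLEET):   # all parts of this report arrived
        return combine(pending.pop(cid))
    return None                           # still waiting on other vehicles

def combine(parts):
    return {v: parts[v] for v in sorted(parts)}

# Two overlapping runs (the "red" and "blue" events above) whose responses
# interleave still produce the right reports:
red, blue = start_report(), start_report()
vehicle_response(red, "car1", 1)
vehicle_response(blue, "car1", 10)
vehicle_response(red, "car2", 2)
vehicle_response(blue, "car2", 20)
print(vehicle_response(red, "car3", 3))    # {'car1': 1, 'car2': 2, 'car3': 3}
print(vehicle_response(blue, "car3", 30))  # {'car1': 10, 'car2': 20, 'car3': 30}
```

The dict keyed by correlation id plays the role of the red/blue coloring in the picture: responses are grouped by the id they carry, not by arrival order.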


Updated Pico Programming Workflow

Tue, 06 Jun 2017 14:04:42 -0600

Summary: This page introduces the tool chain in the programming workflow for picos.


I just got done updating the page in the Pico documentation that talks about the pico programming workflow. I use the idea of a toolchain as an organizing principle. I think it turned out well. If you program picos, it might be of some help.


Sovrin Web of Trust

Wed, 24 May 2017 21:39:45 -0600

Summary: Sovrin uses a heterarchical, decentralized Web of Trust model to build trust in identifiers and give people clues about what and who to trust. The Web of Trust model for Sovrin is still being developed, but differs markedly from the trust model used by the Web.

The Web (specifically TLS/SSL) depends on a hierarchical certificate authority model called the Public Key Infrastructure (PKI) to determine which certificates can be trusted. When your browser determines that the domain name of the site you're on is associated with the public key being used to encrypt HTTP transmissions (and maybe that they’re controlled by a specific organization), it uses a certificate it downloads from the Website itself. How then can this certificate be trusted? Because it was cryptographically signed by some other organization who issued the public key and presumably checked the credentials of the company buying the certificate for the domain. But that raises the question "how can we trust the organization that issued the certificate?" By checking its certificate, and so on up the chain until we get to the root (this process is called “chain path validation”). The root certificates are distributed with browsers and we trust the browser manufacturers to only include trustworthy root certificates. These root certificates are called “trust anchors” because their trustworthiness is assumed, rather than derived.

Webs of Trust

In contrast, a Web of Trust model uses webs of interlocking certificates, signed by multiple parties, to establish the veracity of a certificate (see Web of Trust on Wikipedia for more detail). An entity can be part of many overlapping webs of trust because its certificate can be signed by many parties. Sovrin uses a heterarchical, decentralized Web of Trust model, not the hierarchical PKI model, to establish the trustworthiness of certificates.
The methodology for doing this is built into the Sovrin system, not layered on top of it, by combining a decentralized ledger, decentralized identifiers, and verifiable claims. While the Sovrin Foundation will establish some trust anchors (usually the same Stewards who operate validator nodes on the Sovrin network), these are merely for bootstrapping the system and aren’t necessary for trusting the system once it is up and running. Sovrin uses this trust system to protect itself from DDoS attacks, among other things.

By allowing an identifier to be part of multiple webs of trust, Sovrin allows for reputational context to be taken into account. For example, being a good plumber doesn’t guarantee that a person will be a good babysitter, but a person who was a good plumber and a trustworthy babysitter could be part of different webs of trust that take these two contexts into account. This makes the overall trust model much more flexible and adaptable to the circumstances within which it will be used.

PKI is good for one thing on the Web: showing the public key used to secure HTTP transmissions is correct. In contrast, Sovrin’s decentralized web of trust model is good for anything people need. The goal of Sovrin is to provide the infrastructure upon which these overlapping webs of trust can be built for various applications. Lyft, Airbnb, and countless other sharing economy businesses are essentially specialized trust frameworks. Sovrin provides the means of creating similar trust frameworks without the need to build the trust infrastructure over and over.

Building Trust

Trust anchors are not the only way one could build trust in an identifier for a given purpose. I can think of the following ways that an identifier could come to be trusted: Personal knowledge—The identifier is personally known to you through interactions outside Sovrin. People close to me would be in this category. This category also includes identifiers like those for my employer or o[...]
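The “chain path validation” that PKI relies on can be sketched as a toy walk up the issuer chain. Certificates here are mocked as dicts; real X.509 validation also checks signatures, expiry, revocation, and name constraints:

```python
# Toy model of hierarchical chain path validation: follow issuer links
# from a site's certificate up to a trust anchor shipped with the browser.

TRUST_ANCHORS = {"RootCA"}  # roots whose trustworthiness is assumed

def is_trusted(cert, issuers):
    """Walk the issuer chain; trusted only if we reach a trust anchor."""
    seen = set()
    while cert is not None and cert["subject"] not in seen:
        if cert["subject"] in TRUST_ANCHORS:
            return True
        seen.add(cert["subject"])            # guard against issuer cycles
        cert = issuers.get(cert["issuer"])   # step up one level
    return False

issuers = {
    "RootCA":         {"subject": "RootCA", "issuer": "RootCA"},
    "IntermediateCA": {"subject": "IntermediateCA", "issuer": "RootCA"},
}
site_cert = {"subject": "example.com", "issuer": "IntermediateCA"}
print(is_trusted(site_cert, issuers))  # True
```

The contrast with a web of trust is structural: here every path must terminate at one of a fixed set of anchors, whereas a web-of-trust certificate can gain credibility from many overlapping signers.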

Sovrin In-Depth Technical Review

Mon, 22 May 2017 11:02:17 -0600

Summary: Sovrin Foundation has engaged Engage Identity to perform a security review of Sovrin's technology and processes. Results will be available later this summer.


The Sovrin Foundation and Engage Identity announced a new partnership today. Security experts from Engage Identity will be completing an in-depth technical review of the Sovrin Foundation’s entire security architecture.

The Sovrin Foundation wants to ensure that the advanced technology relied on by everyone depending on Sovrin is secure. That technology protects many valuable assets, including private personal information and essential business data. As a result, we wanted to be fully aware of the risks and vulnerabilities in Sovrin. In addition, the Sovrin Foundation will benefit from having a roadmap for future security investment opportunities.

We're very happy to be working with Engage Identity, a leader in the security and identity industry. Established and emerging cryptographic identity protocols are one of their many areas of expertise. They have extensive experience providing security analysis and recommendations for identity frameworks.

The Engage Identity team is led by Sarah Squire, who has worked on user-centric open standards for many organizations including NIST, Yubico, and the OpenID Foundation. Sarah will be joined by Adam Migus and Alan Viars, both experienced authorities in the fields of identity and security.

The final report will be released this summer and will include a review of the current security architecture, as well as opportunities for future investment. We intend to make the results public. Anticipated subjects of in-depth research are:

  • Resilience to denial of service attacks
  • Key management
  • Potential impacts of a Sovrin-governed namespace
  • Minimum technical requirements for framework participants
  • Ongoing risk management processes

Sovrin Foundation is excited to take this important step forward with Engage Identity to ensure that the future of self-sovereign identity management can thrive and grow.


Life-Like Anonymity

Sat, 13 May 2017 10:08:26 -0600

Summary: Natural anonymity comes from our ability to recognize others without the aid of an external identity system. Online interactions can only mirror life-like anonymity when we have decentralized identity systems that don't put all interactions under the purview of centralized administrative systems.

At VRM-day prior to Internet Identity Workshop last week, Joyce Searls commented that she wants the same kind of natural anonymity in her digital life that she has in real life. In real life, we often interact with others—both people and institutions—with relative anonymity. For example, if I go to the store and buy a coke with cash, there is no exchange of identity information necessary. Even if I use a credit card, it's rarely the case that the entire transaction happens under the administrative authority of the identity system inherent in the credit card. Only the financial part of the transaction takes place in that identity system. This is true of most interactions in real life.

In contrast, in the digital world, very few meaningful transactions are done outside of some administrative identity system. There are several reasons why identity is so important in the digital world:

  • Continuity—Because of the stateless nature of HTTP, building a working shopping cart, for example, requires using some kind of token for correlation of independent HTTP transactions. These tokens are popularly known as cookies. While they can be pseudonymous, they are often correlated across multiple independent sessions using an authenticated identifier. This allows, for example, the customer to have a shopping cart that persists across time on different devices.
  • Convenience—So long as the customer is authenticating, we might as well further store additional information like addresses and credit card numbers for their convenience, to extend the shopping example. Storing these allows the customer to complete transactions without having to enter the same information over and over.
  • Trust—There are some actions that should only be taken by certain people, or people in certain roles, or with specific attributes. Once a shopping site has stored my credit card, for example, I ought to be the only one who can use it. Identity systems provide authentication mechanisms as the means of knowing who is at the other end of the wire so that we know what actions they're allowed to take. This places identifiers in context so they can be trusted.
  • Surveillance—Identity systems provide the means of tracking individuals across transactions for purposes of gathering data about them. This data gathering may be innocuous or nefarious, but there is no doubt that it is enabled by identity systems in use on the Internet.

In real life, we do without identity systems for most things. You don't have to identify yourself to the movie theater to watch a movie or log into some system to sit in a restaurant and have a private conversation with friends. In real life, we act as embodied, independent agents. Our physical presence and the laws of physics have a lot to do with our ability to function with workable anonymity across many domains.

So, how did we get surveillance and its attendant effects on natural anonymity as an unintended, but oft-exploited feature of administrative digital identity systems? Precisely because they are administrative.

Legibility

Legibility is a term used to describe how administrative systems make things governable by simplifying, inventorying, and rationalizing things around them. Venkatesh Rao nicely summarized James C. Scott's seminal book on legibility and its unintended consequences: Seeing Like a State. Identity systems make people legible in order to offer continuity, convenience, and trust. But that legibility also allows surveillance. In some respec[...]

Hyperledger Welcomes Project Indy

Tue, 02 May 2017 08:14:26 -0600

Summary: The Sovrin Foundation announced at the 24th Internet Identity Workshop (IIW) that its distributed ledger, custom-built for independent digital identity, has been accepted into incubation under Hyperledger, the open source collaborative effort created to advance cross-industry blockchain technologies hosted by The Linux Foundation.

We’re excited to announce Indy, a new Hyperledger project for supporting independent identity on distributed ledgers. Indy provides tools, libraries, and reusable components for providing digital identities rooted on blockchains or other distributed ledgers so that they are interoperable across administrative domains, applications, and any other silo.

Why Indy?

Internet identity is broken. There are too many anti-patterns and too many privacy breaches. Too many legitimate business cases are poorly served by current solutions. Many have proposed distributed ledger technology as a solution; however, building decentralized identity on top of distributed ledgers that were designed to support something else (cryptocurrency or smart contracts, for example) leads to compromises and short-cuts. Indy provides Hyperledger projects and other distributed ledger systems with a first-class decentralized identity system.

Indy’s Features

The most important feature of a decentralized identity system is trust. As I wrote in A Universal Trust Framework, Indy provides accessible provenance for trust to support user-controlled exchange of verifiable claims about an identifier. Indy also has a rock-solid revocation model for cases where those claims are no longer true. Verifiable claims are a key component of Indy’s ability to serve as a universal platform for exchanging trustworthy claims about transactions. Provenance is the foundation of accountability through recourse. Another vital feature of decentralized identity—especially for a public ledger—is privacy.
Privacy by Design is baked deep into the Indy architecture, as reflected by three fundamental features:

First, identifiers on Indy are pairwise unique and pseudonymous by default to prevent correlation. Indy is the first DLT to be designed around Decentralized Identifiers (DIDs) as the primary keys on the ledger. DIDs are a new type of digital identifier that were invented to enable long-term digital identities that don’t require centralized registry services. DIDs on the ledger point to DID Descriptor Objects (DDOs), signed JSON objects that can contain public keys and service endpoints for a given identifier. DIDs are a critical component of Indy’s pairwise identifier architecture.

Second, personal data is never written to the ledger. Rather, all private data is exchanged over peer-to-peer encrypted connections between off-ledger agents. The ledger is only used for anchoring rather than publishing encrypted data.

Third, Indy has built-in support for zero-knowledge proofs (ZKP) to avoid unnecessary disclosure of identity attributes—privacy-preserving technology that has been long pursued by IBM Research (Idemix) and Microsoft (UProve), but which a public ledger for decentralized identity now makes possible at scale.

Indy is all about giving identity owners independent control of their personal data and relationships. Indy is built so that the owner of the identity is structurally part of transactions made about that identity. Pairwise identifiers stop third parties from talking behind the identity owner’s back, since the identity owner is the only place pairwise identifiers can be correlated.

Sovereign Identity (click to enlarge)

Indy is based on open standards so that it can interoperate with other distributed ledgers. These start, of course, with public-key cryptography standards. Other important standards cover things like the [...]
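To make the DID/DDO relationship concrete, here is a hypothetical DDO built as a Python dict. Field names follow early drafts of the DID specification and are illustrative rather than normative; the DID value and key material are placeholders I made up:

```python
import json

# Illustrative (simplified) shape of a DID Descriptor Object as described
# above: public keys plus a service endpoint, keyed by a pairwise DID.
# All identifiers and values below are invented placeholders.

ddo = {
    "id": "did:sov:3k9dg356wdcj5gf2k9bw8kfg7a",  # pairwise, pseudonymous
    "publicKey": [{
        "id": "did:sov:3k9dg356wdcj5gf2k9bw8kfg7a#key-1",
        "type": "Ed25519VerificationKey",
        "publicKeyBase58": "placeholder-key-material",
    }],
    "service": [{
        "type": "agent",
        # Where to reach this identity's off-ledger agent; private data
        # flows peer-to-peer through agents, never onto the ledger.
        "serviceEndpoint": "https://agents.example.com/endpoint",
    }],
}

print(json.dumps(ddo, indent=2))
```

Note what is absent: no name, no attributes, no personal data. The ledger entry is only an anchor for keys and an endpoint, which is the privacy property the three features above are protecting.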

Pico Programming Lesson: Modules and External APIs

Tue, 18 Apr 2017 15:25:00 -0600

Summary: A new pico lesson is available that shows how to use user-defined actions in modules to wrap an API.


I recently added a new lesson to the Pico Programming Lessons on Modules and External APIs. KRL (the pico programming language) has parameterized modules that are great for incorporating external APIs into a pico-based system.

This lesson shows how to define actions that wrap API requests, put them in a module that can be used from other rulesets, and manage the API keys. The example (code here) uses the Twilio API to create a send_sms() action. Of course, you can also use functions to wrap API requests where appropriate (see the Actions and Functions section of the lesson for more detail on this).

KRL includes a key pragma in the meta block for declaring keys. The recommended way to use it is to create a module just to hold keys. This has several advantages:

  • The API module (Twilio in this case) can be distributed and used without worrying about key exposure.
  • The API module can be used with different keys depending on who is using it and for what.
  • The keys module can be customized for a given purpose. A given deployment will likely include keys for the multiple modules used in that system.
  • The pico engine can manage keys internally so the programmer doesn't have to worry (as much) about key security.
  • The key module can be loaded from a file or password-protected URL to avoid key loss.

The combination of built-in key management and parameterized modules is a powerful abstraction that makes it easy to build easy-to-use KRL SDKs for APIs.
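The separation the lesson recommends can be sketched in Python rather than KRL; the module layout, key names, and `send_sms()` stand-in below are placeholders for illustration, not the lesson's actual code:

```python
# The key-module pattern: secrets live in one module, the API wrapper in
# another, so the wrapper can be shared without exposing anyone's keys.

# --- keys "module" (kept private, swapped per deployment) ---
KEYS = {
    "twilio": {"account_sid": "ACxxxx", "auth_token": "secret"},  # placeholders
}

# --- API wrapper "module" (distributable; contains no keys) ---
def send_sms(to, message, keys):
    """Illustrative stand-in for the lesson's send_sms() action; a real
    wrapper would POST to the Twilio REST API with these credentials."""
    return {
        "to": to,
        "body": message,
        "authenticated_as": keys["account_sid"],
    }

# The caller wires the two together, so different users of the wrapper
# can supply different credentials:
print(send_sms("+15551234567", "hello", KEYS["twilio"]))
```

In KRL the engine does this wiring for you via the key pragma and module parameters; the sketch just shows why keeping the credentials out of the distributable module is the useful boundary.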

Going Further

The pico lessons have all been recently updated to use the new pico engine. If you're interested in learning about reactive programming and the actor model with picos, walk through the Quickstart and then dive into the lessons.

Photo Credit: Blue Marble Geometry from Eric Hartwell (CC BY-NC-SA 3.0)