
Donald Clark Plan B



What is Plan B? Not Plan A!



Updated: 2018-02-13T10:50:13.405+00:00

 



Online healthcare learning - in minutes not months

2018-02-01T08:05:37.112+00:00

Healthcare is a complex business. So many things to learn, so much new knowledge to constantly master. The sector is awash with documents, from compliance to clinical guidelines, all with oodles of detail and never enough time to train, retain and recall. As it is patients' health, even lives, that matter, there is little room for error. Yet so much training is still delivered via lectures and PowerPoint in rooms full of professionals who are badly needed on the front line. There must be a better way to deliver this regulatory and clinical knowledge.

Online learning is part of the solution, but traditional online learning takes months to produce and even one 50-page clinical guideline is often prohibitively expensive. With this in mind, rather than use tools where most of the budget goes on graphics and not interaction, AI is producing tools that do this for you. One of those tools is WildFire, a service that creates high-retention online learning in minutes not months, at a fraction of previous costs.

Sources
So far we have delivered a lot of content to a range of organisations, from pharmaceutical companies and a Royal College to the NHS. The content originated as:
·      Documents
·      PowerPoints
·      Podcasts
·      Videos

Easy input
With a modest amount of preparation, one takes the text file (or automatically created transcripts from podcasts and video) and cuts and pastes it into WildFire, which identifies what it thinks are the main learning points. Taking our lead from recent research in cognitive science, well summarised by the researchers in Make It Stick, we focus not on multiple-choice questions (see weaknesses here) but open input, even voice, if desired. Open input is superior to MCQs as it results in better retention and recall.

Frictionless
Note that healthcare documents are often highly regulated, and the fact that we take the original document means we are not breaking that covenant. It also means almost no friction between designers and subject matter experts. The content has already been signed off – we use that content in an unadulterated form.

Effortful learning
The learner has to literally type in the correct answers, identified by our AI engine. But we do much more. We also get the AI to identify links out to supplementary content. This is done automatically. It works well in healthcare, as the vocabulary, definitions and concepts can be daunting.

Chunking
We break the content down into small 10-15 minute learning experiences. This is necessary for focus as well as frequency of formative assessment. So a large compliance or clinical guideline document, such as a NICE Guideline, can be broken down meaningfully and accessed as and when needed.

Competence
At the end of each pass through one of these short modules, your knowledge is assessed as Green (known), Amber (nearly known) or Red (not known). You must repeat the Ambers and Reds until you reach full 100% competence. This matters in healthcare. Getting 70% is fine but the other 30% can kill.

Curation
We don't stop there. At the end of each module you can add curated content (again using AI) by searching for content directly related to the module at hand, from the selected learning points. This guided curation increases relevance. This is the stuff that you could know, as opposed to the stuff you should know.

Types of content
This is about moving from reading to retention. One clinical guideline may be intended for many audiences: clinicians, various healthcare professionals, carers, even patients. Updates can be delivered separately when they are published. In general, WildFire has been used for:
·      Peer-reviewed medical papers
·      Royal College clinical guidelines
·      NICE Guidelines
·      Clinician in charge of trial podcasts
·      Question & answer session with experts
·      Clinician in charge of [...]
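Purely as an illustration of the workflow described above, here is a minimal, hypothetical sketch in Python: pull candidate learning points out of a source text, turn them into open-input items, and keep re-testing anything that is not yet Green until 100% competence. The crude keyword extraction and function names are invented for this sketch; WildFire's actual AI is far more sophisticated.

```python
# Hypothetical sketch of the described workflow: extract candidate learning points,
# present them as open-input items, and repeat Ambers and Reds until all are Green.

import re

def extract_learning_points(text, max_items=5):
    """Naive stand-in for AI key-concept extraction: pick a capitalised term in each
    sentence as the 'answer' the learner will have to type back in."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    items = []
    for sentence in sentences:
        terms = re.findall(r"\b[A-Z][a-z]{4,}\b", sentence)
        if terms:
            answer = terms[0]
            prompt = sentence.replace(answer, "_____", 1)
            items.append({"prompt": prompt, "answer": answer, "status": "Red"})
    return items[:max_items]

def run_mastery_loop(items, ask):
    """Keep re-presenting Amber and Red items until every item is Green."""
    while any(item["status"] != "Green" for item in items):
        for item in items:
            if item["status"] == "Green":
                continue
            response = ask(item["prompt"] + " ").strip().lower()
            if response == item["answer"].lower():
                item["status"] = "Green"   # known
            elif response and response[0] == item["answer"][0].lower():
                item["status"] = "Amber"   # nearly known, will be re-asked
            else:
                item["status"] = "Red"     # not known, will be re-asked
    return items

if __name__ == "__main__":
    guideline = "Sepsis must be recognised early. Escalation to a senior clinician should be immediate."
    items = extract_learning_points(guideline)
    run_mastery_loop(items, ask=input)   # open input: the learner types the answer
```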



7 ways Africa can use AI to leapfrog into the future

2018-01-27T10:03:53.944+00:00

Africa is huge. Just how huge is rarely appreciated, but this map helps. This massive landmass makes land transport difficult, physical internet cabling difficult, infrastructure difficult. But with two spots from one satellite, it is possible to cover the entire continent. Bad or non-existent infrastructure is the condition for leapfrogging. So here's a question... Which African leapfrogged the transport and energy sectors to such a degree that the oil economy looks as though it's on the way out? He did this by seeing the existing model – the oil economy – as the problem, and so created the self-driving, actually AI-driven, car and the panels and batteries that change the way we power homes, even entire regions. He is, of course, Elon Musk, a leapfrogger. But Africa itself leapfrogs in all sorts of ways, from mobile banking to drones for blood delivery.

The first technology, the stone axe, was invented by early hominin species in the Rift Valley in Africa, and allowed us to leapfrog other species, who may have been stronger and faster, but lacked the technology to compete. The first writing in the Nile Valley, again in Africa, on the first flexible writing material, papyrus, also invented in Africa, allowed the Egyptians to leapfrog other civilisations, creating a stable civilisation that lasted continuously for 4000 years, longer than any other empire. The very tools and technology that the modern world is built on were first seen in Africa.

There's a lesson here – the 'Leapfrog Principle'. This is the idea that one can innovate more easily in environments where precedents and incumbents are poor, primitive or absent than in wealthier or technologically richer environments. Africa can, again, be the crucible for leapfrog ideas and development. In finance, healthcare, energy, agriculture and education, AI can augment and improve productivity.

Leapfrog 1  Mobile banking
Africa had little in the way of a retail banking infrastructure and most people did not have a bank account. Along comes the ubiquity of cheap mobile devices and Africa does what richer countries are only now waking up to – mobile banking. In its wake came advantages in communications, finding work, paying bills and agricultural information – markets, techniques and so on. The runaway success of M-Pesa, the mobile money transfer service launched by Safaricom, Kenya's largest mobile operator, and Vodafone in 2007, has allowed millions to pay bills, buy goods, receive remittances from abroad and even access learning. None of this would have been possible without AI-driven encryption, and now AI as the new UI.

Leapfrog 2  Zipline
Take Zipline, in Rwanda, where drones deliver blood to rural locations. Doctors request blood for 'at risk' patients and drones deliver, dropping the protected packages by parachute, from 30 feet, into the backyard of the clinic, aided by GPS and navigational software. This is fast, cheap, efficient and saves lives. Why Rwanda? Well, the road infrastructure prohibits speed of delivery, there is less regulation to hold back these innovations and, as a small country, it is ambitious and willing to take more risks. Older countries tend to become more risk averse. Strangely enough, it is sometimes the absence of physical infrastructure – roads, fixed-line telephone networks, transport options, power stations, oil reserves – that makes leapfrogging more likely. Investment in leapfrog technology faces less competitive pressure from incumbent technology and infrastructure.

Leapfrog 3  Off Grid Electric
The International Energy Agency states that there are over 600 million people in sub-Saharan Africa who do not have access to electricity. An African startup, Off Grid Electric, backed by SolarCity, wants to ramp up the supply of solar panels across Africa, at an affordable charge of $7 a month for the system. It already powers 125,000 households. Musk has taken technology built for the wealthy car industry and applied it in a modular, LED, robust, affordable way to an[...]



Nurses – why degrees are not the answer for NHS crisis

2018-01-20T11:16:27.400+00:00

The graduate demand for nursing is a filter excluding many who used to go into the profession. My mother was a nurse, my sister was a nurse for 30-plus years – neither would now have got into the profession because of the academic entrance demands. Working-class youngsters, in particular, are being excluded and I do not believe for one minute that they are unsuitable. The nursing shortage is not only caused by this hurdle, but it has been exacerbated by the need for someone who wants to be a nurse to get university entrance qualifications, spend years at university, then exit into a relatively low-paid profession, with a huge loan.

Alan Ryan, who was a nurse for 20 years, and knows a thing or two about training in the NHS, said something quite profound in Berlin recently: "All of our jobs (in the NHS) are, in practice, apprenticeships, from consultants to cleaners." His point was that healthcare is an eminently practical affair. He supports alternative routes into healthcare professions.

In truth, Higher Education in the UK has land-grabbed vocational education, mainly on the basis of increasing its revenues. What were adequate, shorter and more experiential training courses are now degrees, making them longer and far more expensive, whether for the state or the individual. Universities may claim to be about critical thinking but a glance at some of the degrees on offer shows that this is far from the truth – Dentists, Doctors, Nurses, Lawyers, Engineers and so on. In truth, as Roger Schank tells us, Universities are about "creating Scholars", and, as he says, "we have enough Scholars already". I'd add that there may be a surplus, as shown by the ease with which adjuncts can be hired to do the 'teaching', even in top Universities. This is not the environment into which nursing easily fits.

We have many nurses from other countries, even the EU, such as Germany, who are hired without going through this University experience, so it is not as if it is a necessary condition for success. Those who deliver such courses will claim that a nurse's job is more complex than it used to be, and of that I have no doubt. But complex does not necessarily mean more lecturing and theory. 'It is difficult to get a man to understand something, when his salary depends on his not understanding it,' said Upton Sinclair, and it is difficult to get something out of the world of lectures and essays once 'Lecturers' get hold of it.

There are many causes of the current nursing crisis:
failure to plan for demand
degree course entrance qualifications
abolishing bursaries
new English tests
agency costs
foreign country demands
working conditions

But part of the solution here is to reverse this policy of nursing degrees, not by demolishing that option but by opening up alternative routes, especially apprenticeships. A vocational route is badly needed, and should have been opened up years ago. In practice, we depended on migrant labour and extortionate agency fees. We didn't have to pay for their education, which is good for neither us nor the countries from which they came, but that is not a good excuse for the failure to train our own nurses. Brexit will at least slow that process, but we need an alternative route. The nursing assistant route is a start – we need much, much more.[...]



AI just outperformed humans at reading, potentially putting millions of customer service jobs at risk of automation. Could it do the same in learning?

2018-01-24T08:55:14.348+00:00

Something momentous just happened. An AI programme, from Alibaba, can now, for the first time, read a text and understand it better than humans. The purple line has just crossed the red line and the implications are huge. Think through the consequences here, as this software, using NLP and machine learning, gets better and better. The aim is to provide answers to questions. This is exactly what millions of people do in jobs around the world: customer service in call centres, doctors with patients, anywhere people reply to queries... and any interactions where language and its interpretation matter.

Health warning
First, we must be careful with these results, as they depend on two things: 1) the nature of the text; 2) what we mean by 'reading'. Such approaches often work well with factual texts but not with more complex and subtle texts, such as fiction, where the language is difficult to parse and understand, and where there is a huge amount of 'reading between the lines'. Think about how difficult it is to understand even that last sentence. Nevertheless, this is a breakthrough.

The Test
It is the first time a machine has outdone a real person in such a contest. They used the Stanford Question Answering Dataset to assess reading comprehension. The test is to provide exact answers to more than 100,000 questions. As an open test environment, you can do it yourself, which makes the evidence and results transparent. Alibaba's neural network model, based on a Hierarchical Attention Network, reads down through paragraphs to sentences to words, and identifies potential answers and their probabilities. Alibaba has already used this technology in its customer service chatbot, Dian Xiaomi, which serves an average of 3.5 million customers a day on the Taobao and Tmall platforms. (See 10 uses for chatbots in learning.)

Learning
Indeed, the one area that is likely to benefit hugely from these advances is education and training. The Stanford dataset does have questions that are logically complex and, in terms of domain, quite obscure, but one should see this development as great at knowledge, not yet effective with questions beyond this. That's fine, as there is much that can be achieved in learning. We have been using this AI approach to create online learning content, in minutes not months, through WildFire. Using a similar approach, we identify the main learning points in any document, PPT or video, and build online learning courses quickly, with an approach based on recent cognitive psychology that focuses on retention. In addition, we add curated content.

Pedagogy
The online learning is very different from the graphics-plus-multiple-choice paradigm. Rather than rely on the weak 'select from a list' MCQs (see critique here), we get learners to enter their answers in context. It focuses on open input and the retention techniques outlined by Roediger and McDaniel in Make It Stick.

Speed
To give you some idea of the sheer speed of this process, we recently completed 158 modules for a global company, literally in days, without a single face-to-face meeting with the project manager. The content was then loaded up to their LMS and is ready to roll. This was good content and they are very happy with the results.

Pain relief
An interesting outcome of this approach to creating content was the lack of heat generated during the production process. There was no SME/designer friction, as that was automated. That's one of the reasons we didn't need a single face-to-face meeting. It allowed us to focus on getting it done and on quality control.

Sectors
Organisations have been using this AI-created content as pre-training for face-to-face training for auditors in Finance, product knowledge and GMP in Manufacturing, health and safety, everything from nurse training to clinical guidelines in the NHS, and apprenticeships in a global Hospitality company. All sorts of education and training in all sorts of contexts.

Conclusion [...]
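To make the machine-reading task concrete, here is a toy word-overlap baseline in Python. It is nothing like Alibaba's neural model, which reads down from paragraphs to sentences to words and scores candidate answer spans, but it shows, under deliberately crude assumptions, what "give an exact answer from a passage" means as a computation.

```python
# Toy extractive question answering: score each sentence of a passage by its word
# overlap with the question and return the best-scoring sentence as the answer context.

import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "in", "is", "are", "what", "which", "who", "does", "and"}

def tokens(text):
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]

def answer(question, passage):
    """Return the sentence with the highest overlap with the question,
    plus a rough confidence score in [0, 1]."""
    q = Counter(tokens(question))
    best, best_score = "", 0.0
    for sentence in re.split(r"(?<=[.!?])\s+", passage):
        s = Counter(tokens(sentence))
        overlap = sum((q & s).values())
        score = overlap / max(sum(q.values()), 1)
        if score > best_score:
            best, best_score = sentence, score
    return best, best_score

passage = ("The Stanford Question Answering Dataset tests reading comprehension. "
           "Models must provide exact answers to more than 100,000 questions.")
print(answer("What does the Stanford Question Answering Dataset test?", passage))
```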



Superfast AI creation of online learning in manufacturing - fast, cheap, effective

2018-01-15T18:42:18.118+00:00

We clearly have a productivity problem in manufacturing, in part due to a lack of training and skills. As manufacturing becomes more complex and automated, it needs lots of skills other than those traditionally repetitive jobs that are being replaced. Could AI help solve this problem? AI may lead to a loss of jobs, but we're showing that AI can also help train people in the jobs that remain, to increase productivity, and help in training for new jobs. We've been creating online learning quickly and at low cost through WildFire.

Productivity puzzle
The manufacturing sector continues to struggle for productivity, despite growing levels of economic activity. Manufacturing productivity actually fell by 0.2 per cent in the third quarter of 2016, compared to 0.3 per cent growth in services. Many attribute this, at least partially, to low skills and training. As productivity growth seems to have stalled, technology offers a reboot, both in process and in learning. Typically, 'basic goods' manufacturing has been stuck with a rather basic use of technology. This is in stark contrast to 'advanced manufacturing', which has been eager to adopt advanced technology. Both, however, have been tardy in their use of technology to get knowledge and skills to their staff. They have both been far behind those in finance, healthcare, hospitality and other sectors. Understandably, learning in manufacturing has been largely classroom-based and learning by doing. Yet, as manufacturing becomes more complex, knowledge and skills have become ever more important.

Double dividend
One immediate way to increase productivity is through online learning. This has a double dividend, in that it can save costs (travel, rooms, equipment and trainers) as well as increase productivity through better knowledge and skills. With access to mobile technology, learning can be delivered to a distributed audience, even on the shop floor. In addition, shift work and access to training in down-time and gaps in production can also be accommodated.

Barriers
Manufacturing is often thought of as a sector not much involved in online learning. Several factors are at work here:
1. Lots of SMEs without large training budgets
2. Less likely to have an LMS to deliver content
3. Less likely to have L&D aware of online learning
4. Less access to devices for online learning
5. Practical environment where factory-floor training is more prevalent
To make online learning work there needs to be more awareness of why online learning can help, as well as how it can be done.

What we did
First we focused on basic, generic training needs, and produced dozens of modules on:
1. Manual handling
2. Health and safety
3. General Manufacturing Practice
4. Language of manufacturing
5. Gas cylinders
6. Product knowledge
These are largely knowledge-based modules that underpin practical training in the lab, workshop or on the factory floor. Bringing everyone up to a common standard really helps when it comes to practical, vocational training. You really should understand the science of gas storage and use if you handle dangerous gases and want to weld safely. In addition, we trained everyone from apprentices and administration staff to sales people. To this end we produced modules quickly and cheaply using WildFire, an AI service that takes any document, PowerPoint or video, and creates online learning in minutes not months. We have done this successfully in finance and healthcare, but manufacturing posed different challenges.
1. Much of the training is text-heavy, from manuals without any sophisticated use of images. That we solved through quick and low-cost photo shoots, literally shooting to a shot list, as the online modules had already been created.
2. In not one case did we find an LMS (Learning Management System), so we had to deliver from the WildFire server. This actually has one great advantage in that it freed us from the limitations of SCORM. We could gather oodles of data for monitoring and analysis.
3. Doing [...]
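Purely as an illustration of the richer, event-level data you can capture once you are not tied to SCORM's completion-and-score model, here is a sketch of an xAPI-style statement built in Python. The learner, module URL and score are invented placeholders, and nothing here implies this is how WildFire actually records its data.

```python
# Illustrative only: an xAPI-style learning record (actor / verb / object / result)
# for a manufacturing module, showing the kind of granular data available once
# delivery is not constrained to SCORM's completion and score fields.

import json
from datetime import datetime, timezone

statement = {
    "actor": {"name": "A Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://example.com/modules/manual-handling",   # hypothetical module URL
        "definition": {"name": {"en-GB": "Manual handling"}},
    },
    "result": {
        "completion": True,
        "score": {"scaled": 0.92},   # e.g. proportion of open-input items answered correctly
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# In practice this would be sent to a learning record store; here we just print it.
print(json.dumps(statement, indent=2))
```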



Astonishing fake in education and training - the graph that never was

2018-01-07T19:54:38.143+00:00

I have seen this in presentations by the CEO of a large online learning company, the Vice-Chancellor of a University, Deloitte's Bersin, and in innumerable keynotes and talks over many years. It's a sure sign that the speaker has no real background in learning theory and is basically winging it. Still a staple in education and training, especially in 'train the trainer' and teaching courses, a quick glance is enough to be suspicious.

Dale's cone
The whole mess has its origins in a book by Edgar Dale way back in 1946. There he listed things from the most abstract to the most concrete: Still pictures, Visual symbols, Verbal symbols, Radio recordings, Motion pictures, Exhibits, Field trips, Demonstrations, Dramatic participation, Contrived experiences, and Direct purposeful experience. In the second edition (1954) he added Dramatised experiences through Television and in the third edition, heavily influenced by Bruner (1966), he added enactive, iconic and symbolic. But let's not blame Dale. He admitted that it was NOT based on any research, only a simple intuitive model, and he did NOT include any numbers. It was, in fact, simply a gradated model to show the concreteness of different audio-visual media. Dale warned against taking all of this too seriously, as a ranked or hierarchical order. Which is exactly what everyone did. He actually listed the misconceptions in his 1969 third edition (pp. 128-134). So the first act of fakery was to take a simple model, ignore its original purpose and the author's warnings, and use it for other ends.

Add fake numbers
First up, why would anyone with a modicum of sense believe a graph with such rounded numbers? Any study that produces a series of results bang on units of ten would seem highly suspicious to someone with the most basic knowledge of statistics. The answer, of course, is that people are gullible, especially to messages that appeal to their intuitive beliefs, no matter how wrong. The graph almost induces confirmation bias. In any case, these numbers are senseless unless you have a definition of what you mean by learning and the nature of the content. Of course, there was no measurement – the numbers were made up.

Add fake author
At this point the graph has quite simply been sexed up by adding a seemingly genuine citation from an academic and a journal. This is a real paper, about self-generated explanations, but it has nothing to do with the fake histogram. The lead author of the cited study, Dr. Chi of the University of Pittsburgh, a leading expert on 'expertise', when contacted by Will Thalheimer, who uncovered the deception, said, "I don't recognize this graph at all. So the citation is definitely wrong; since it's not my graph." Serious-looking histograms can look scientific, especially when supported by bogus academic credentials.

Add new categories
The next bit of fakery was to add 'teaching others' to the end, topping it up to, you guessed it – 90%. You can see what's happening here – flatter teachers and teaching, and they'll believe anything. They also added the 'Lecture' category at the front – and, curiously, CD-ROM! In fact, the histogram has appeared in many different forms, simply altered to suit the presenter's point in a book or course. This is from Josh Bersin's book on Blended Learning. It is easy to see how the meme gets transmitted when consultants tout it around in published books. Bersin was bought by Deloitte. What happens here is that Dale's original concept is turned from a description of media into a prescription of methods.

The coloured pyramid
The final bit of fakery was to sex it up with colour and shape, going back to Dale's pyramid but with the fake numbers and new categories added. It is a cunning switch, to make it look like that other caricature of human nature, Maslow's hierarchy of needs. It suffers from the same simplistic idiocy that Maslow's pyramid does – that comp[...]



Is debate around 'bias in AI' driven by human bias? Discuss

2017-12-23T13:16:16.721+00:00

When AI is mentioned, it's only a matter of time before the word 'bias' is heard. They seem to go together like ping and pong, especially in debates around AI in education. Yet the discussions are often merely examples of bias themselves – confirmation, negativity and availability biases. There's little analysis behind the claims: 'AI programmers are largely white males so all algorithms are biased – patriarchal and racist', or the commonly uttered phrase 'All algorithms are biased'. In practice, you see the same few examples being brought up time and time again: black face/gorilla and reoffender software. Most examples have their origin in Cathy O'Neil's Weapons of Math Destruction. More of this later. To be fair, AI is for most an invisible force, the part of the iceberg that lies below the surface. AI is many things, it can be technically opaque, and true causality is difficult to trace. So, to unpack this issue, it may be wise to look at the premises of the argument, as this is where many of the misconceptions arise.

Coders and AI
First up, the charge that the root cause is male, white coders. AI programmers these days are more likely to be Chinese or Indian than white. AI is a global phenomenon, not confined to the western world. The Chinese government has invested a great deal in these skills through Artificial Intelligence 2.0. The 13th Five-Year Plan (2016-2020), the Made in China 2025 program, the Robotics Industry Development Plan and the Three-Year Guidance for Internet Plus Artificial Intelligence Plan (2016-2018) are all contributing to boosting AI skills, research and development. India has an education system that sees 'engineering' and 'programming' as admirable careers and a huge outsourcing software industry with a $150 billion IT export business. Even in Silicon Valley the presence of Asian and Indian programmers is so prevalent that they feature in every sitcom on the subject. Even if the numbers were wrong, the idea that coders infect AI with racist code, like the spread of Ebola, is ridiculous. One wouldn't deny the probable presence of some bias, but the idea that it is omnipresent is ridiculous.

Gender and AI
True, there is a gender differential, and this will continue, as there are gender differences when it comes to focused, attention-to-detail coding in the higher echelons of AI programming. We know that there is a genetic cause of autism, a constellation (not spectrum) of cognitive traits, and that this is heavily weighted towards males. For this reason alone there is likely to be a gender difference in high-performance coding teams for the foreseeable future. In addition, the idea that these coders are unconsciously, or worse, consciously creating racist and sexist algorithms is an exaggeration. One has to work quite hard to do this, and to suggest that ALL algorithms are written in this way is another exaggeration. Some may be, but most are not.

Anthropomorphic bias and AI
The term Artificial Intelligence can in itself be a problem, as the word 'intelligence' is a genuinely misleading, anthropomorphic term. AI is not cognitive in any meaningful sense, not conscious and not intelligent other than in the sense that it can perform some very specific tasks well. It may win at Jeopardy, chess and GO, but it doesn't know that it is even playing these things, never mind that it has won. Anthropomorphic bias appears to arise from our natural ability to read the minds of others, leading us to attribute qualities to computers and software that are not actually there. Behind this basic confusion is the idea that AI is one thing – it is not. It encapsulates 2500 years of mathematics, since Euclid put the first algorithm down on papyrus, and there are many schools of AI that take radically different approaches. The field is an array of different techniques, often mathematically quite separate from each other. ALL humans are biased[...]



10 uses for Chatbots in learning (with examples)

2017-12-17T12:24:07.425+00:00

As chatbots become common in other contexts, such as retail, health and finance, so they will become common in learning. Education is always somewhat behind other sectors in considering and adopting technology, but adopt it, it will. There are several points across the learner journey where bots are already being used, and already a range of fascinating examples.

1. Onboarding bot
Onboarding is notoriously fickle. New starters come in at different times, have different needs, and the old model of a huge dump of knowledge, documents and compliance courses is still all too common. Bots are being used to introduce new students or staff to the people, environment and purpose of the organisation. New starters have predictable questions, so answers can be provided straight to mobile, directed to people, processes or procedures, where necessary. It is not that the chatbot will provide the entire solution, but it will take the pressure off and respond to real queries as they arise. Available 24/7, it can give access to answers as well as people. What better way to present your organisation as innovative and responsive to the needs of students and staff?

2. FAQ bot
In a sense Google is a chatbot. You type something in and up pops a set of ranked links. Increasingly you may even have a short list of more detailed questions you may want to ask. Straight-up FAQ chatbots, with a well-defined set of answers to a predictable set of questions, can take the load off customer queries, support desks or learner requests. A lot of teaching is admin, and a chatbot can relieve that pressure at a very simple level within a definite domain – frequently asked questions. (A minimal sketch of this pattern follows at the end of this post.)

3. Invisible LMS bot
At another level, the invisible LMS, fronted by a chatbot, allows people to ask for help and shifts formal courses into performance support, within the workflow. Learning Pool's 'Otto' is a good example. It sits on top of content, accessible from Facebook, Slack and other commonly used social tools. You get help in various forms, such as simple text, chunks of learning, people to contact and links to external resources, as and when you need them. Content no longer sits in a dead repository, waiting for you to sign in or take courses, but is a dynamic resource, available when you ask it something.

4. Learner engagement bot
Learners are often lazy. Students leave essays and assignments to the last minute, learners fail to do pre-work and courses – it's a human failing. They need prompting and cajoling. Learner engagement bots do this, with pushed prompts to students and responses to their queries. 'Differ' from Norway does precisely this. It recognises that learners need to be engaged and helped, even pushed through the learning journey, and that is precisely what 'Differ' does.

5. Learner support bot
Campus support bots or course support bots go one stage further and provide teaching support in some detail. The idea is to take the administrative load off the shoulders of teachers and trainers. Response times to emails from faculty to students can be glacial. Learner support bots can, if trained well, respond with accurate and consistent answers quickly, 24/7. The Georgia Tech bot Jill Watson, and its descendants, responds in seconds. Indeed, they had to slow its response time down to mimic the typing speed of a human. The learners, 350 AI students, didn't guess that it was a bot and even put it up for a teaching award.

6. Tutor bots
Tutor bots are different from chatbots in terms of their goals, which are explicitly 'learning' goals. They retain the qualities of a chatbot – flowing dialogue, tone of voice, exchange and a human(-like) feel – but focus on the teaching of knowledge and skills. Straight-up teaching is another approach, where the bot behaves like a Socratic teacher, asking sprints of questions and providing encouragement and feedback. This typ[...]
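As promised above, here is a deliberately tiny sketch of the FAQ bot pattern: a well-defined set of answers to a predictable set of questions, matched on keywords, with a hand-off to a human when nothing matches. The intents, keywords and answers are all invented for illustration.

```python
# Toy FAQ bot: match an incoming message against keyword sets for each intent and
# return the canned answer, falling back to a human when nothing matches.

import re

FAQS = {
    "enrolment": {
        "keywords": {"enrol", "enroll", "sign", "register", "start"},
        "answer": "You can enrol from the induction page; your manager approves the request.",
    },
    "deadlines": {
        "keywords": {"deadline", "due", "late", "extension"},
        "answer": "Compliance modules are due within 30 days of your start date.",
    },
}

def faq_reply(message):
    words = set(re.findall(r"[a-z]+", message.lower()))
    best_intent, best_hits = None, 0
    for intent, entry in FAQS.items():
        hits = len(words & entry["keywords"])
        if hits > best_hits:
            best_intent, best_hits = intent, hits
    if best_intent is None:
        return "I'm not sure. I've passed this on to a human tutor who will get back to you."
    return FAQS[best_intent]["answer"]

print(faq_reply("When is my compliance module due?"))
print(faq_reply("Can I bring my dog to the office?"))
```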



7 solid reasons to suppose that chatbot interfaces will work in learning

2017-12-14T14:33:19.103+00:00

In Raphael's painting various luminaries stand or sit in poses on the steps, but look to the left of Plato and Aristotle and you'll see a poor-looking figure in a green robe talking to people – that's Socrates. Most technology in teaching has run against the Socratic grain, such as the blackboard, turning teachers into preachers and lecturers. With chatbots we may be seeing the return of the Socratic method. This return is being enabled by AI, in particular Natural Language Processing, but also through other AI techniques such as adaptive learning, machine learning and reinforcement learning. AI is largely invisible, but it does have to reveal itself through its user interface. AI is the new UI, but because the AI is doing a lot of the smart, behind-the-scenes work, it is best fronted by a simple interface – the simpler the better. The messenger interface seems to have won the interface wars, transcending menus and even social media. Simple Socratic dialogue seems to have risen, through a process of natural selection, as THE interface of choice, especially on mobile. So can this combination of AI and Socratic UI have an application in learning? There are several reasons for being positive about this type of interface in learning.

1. Messaging the new interface
We know that messaging, the interface used by chatbots, has overtaken social media over the last few years, especially among the young. Look at the mobile home screen of any young person and you'll see the dominance of chat apps. The Darwinian world of the internet is the perfect testing ground for user interfaces, and messaging is what you are most likely to see when looking over the shoulder of a young person. So one could argue that for younger audiences chatbots are particularly appropriate, as they already use this as their main form of communication. They have certainly led the way in its use, but one could also argue that there are plenty of reasons to suppose that most other people like this form of interface.

2. Frictionless
Easy to use, it allows you to focus on the message, not the medium. The world has drifted towards messaging for the simple reason that it is simple. By reducing the interface to its bare essentials, the learner can focus on the more important task of communication and learning. All interfaces aim to be as frictionless as possible and, apart from speculative mind reading from the likes of Elon Musk with Neuralink, this is as bare-bones as one can get.

3. Reduces cognitive load
Messaging is simple, a radically stripped-down interface that anyone can use. It requires almost no learning and mimics what we all do in real life – simply dialogue. Compared to any other interface it is low on cognitive load. There is little other than a single field into which you type, so it goes at your pace. What also matters is the degree to which it makes use of NLP (Natural Language Processing) to really understand what you type (or say).

4. Chunking
One of the joys of messaging, and one of the reasons for its success, is that it is succinct. It is by its very nature chunked. If it were not, it wouldn't work. Imagine being on a flight with someone: you ask them a question and get a one-hour lecture in return. Chatbots chat, they don't talk at you.

5. Media equation
In a most likely apocryphal story, when Steve Jobs presented the Apple Mac screen to Steve Wozniak, Jobs had programmed it to say 'Hello…'. Wozniak thought it unnecessary – but who was right? We want our technology to be friendly, easy to use, almost our companion. This is as true in learning as it is in any other area of human endeavour. Nass and Reeves, in The Media Equation, did 35 studies to show that we attribute agency to technology, especially computers. We anthropomorphise technology in such a way that we think the bot is human or at least [...]



Fully Connected by Julia Hobsbawm – I wish I hadn't

2017-12-14T14:02:10.338+00:00

Having seen Julia get torn to pieces by an audience in Berlin, I decided to give the book a go. But first, Berlin. After an excruciating anecdote about being in the company of royalty in St James's Palace and meeting Zac Goldsmith (it made no sense, other than name-dropping), she laid out the ideas in her book, describing networks as including Facebook, Ebola and Zika – all basically the same thing, a ridiculous conflation of ideas. "All this social media is turning us into sheep," she bleated. Then she asked, "How many of you feel unhappy in your jobs?" Zero hands went up. Oh dear, try again. "How many of you feel overloaded?" Three hands in a packed room. Oops, that punctured the proposition... She then made a quick retreat into some ridiculous generalisations about her being the first to really look at networks, and that Trump should be thrown off Twitter (a strong anti-freedom-of-expression line here... a bit worrying). Basically playing the therapeutic contrarian. The audience were having none of it, many of them experts in this field.

Then came the blowback. Stephen Downes, who knows more than most on the subject of networks, was blunt: "Everything you've said is just wrong." Wow. He then explained that there is a large literature on networks, that the subject has been studied in depth, and that she was low on knowledge and research. He was right. Andrew Keen, on Stephen Downes' accusation that Hobsbawm was flaky on assumptions and research, added, "Good – glad to see someone with a hard-hitting point..." Claire Fox then joined the fray, pointing out that this contrarian stuff smacks of hysteria – it's all a bit preachy and mumsy. So, fast forward: I'm back from Berlin and I bought the book – Fully Connected. To be fair, I wanted to read the work for myself. Turns out the audience were right.

Fully Connected
The Preface opens up with a tale about Ebola, setting the whole 'networks are diseased and I have the cure' tone of the book. "Culture, diseases, ideas: they're all about networks," says Hobsbawm. Wow – she's serious and really does want to conflate these things just to set up the self-help advice. What follows is a series of well-worn stuff about Moore's Law, Stanley Milgram, Six Degrees of Separation, Taleb's Black Swan, Tom Peters, Peter Drucker… punctuated by anecdotes about her and her family. It's a curious mixture of dull, middle-class anecdotes and old-school stuff without any real analysis or insights.

Ah, but here comes her insight – her new term, 'social health'. All is revealed. Her vision is pathological, the usual deficit view of the modern world. All of you out there are wrapped up in evil spiders' webs, diseased, and I have the cure. Her two big ideas are The Way to Wellbeing and The Blended Self. All of this is wrapped up in pseudo-medical nonsense: Information obesity, Time starvation, Techno-spread, Organisational bloat. It's like a bad diet book where you're fed a diet of bad metaphors. Her 'Hexagon' of social health is the diagnosis and cure, as she puts herself forward as the next Abraham Maslow – replacing the pyramid with a hexagon – we're networked, geddit?

Part two is even worse. The usual bromides around Disconnecting, Techno-shabbats, Designing your honeycomb, The knowledge dashboard. Only then do you realise that this is a really bad self-help book based on a few personal anecdotes and no research whatsoever. The postscript says it all, a rambling piece about the Forth Road Bridge. I grew up in the town beneath that bridge and saw it built – but even I couldn't see what she was on about. There are some serious writers in this area, like Andrew Keen, Nicholas Carr and others. Julia is not one of them.[...]



Invisible LMS: the LMS is not dead, but it needs to be invisible – front it with a chatbot

2018-01-03T14:05:01.814+00:00

Google is almost invisible. As the most powerful piece of back-end consumer software ever built, it hides behind a simple letterbox. Most successful interfaces follow this example of Occam's Razor – the minimum number of entities to reach your goal. Sadly, the LMS does the opposite. Often difficult to access and navigate, it looks like something from the 90s – that's because it is something from the 90s. The clue is in the middle word, 'management'. The LMS is largely about managing learners and learning, not engagement. But there's a breakthrough. What we are seeing now are Learning ENGAGEMENT Systems. It is not that the functionality of an LMS is flawed, but its UI/UX most certainly is. Basically repositories, LMSs are insensitive to performance support and learning embedded in workflow, and make people do far too much work. They put obstacles in the way of learning and fail the most basic demands for data, as they are trapped in the hideously inadequate SCORM standard.

First up – we must stop seeing employees as learners. No one calls anyone a learner in real life; no one sees themselves as a learner in real life. People are people, doing a job. It's why I'm allergic to the 'lifelong learning' evangelists, who often see life as a lifelong course, or life coaches – get a life, not a coach. So how could we make the LMS more invisible, while retaining and improving functionality? First, get rid of the multiple sign-ons (to be fair, most have), nested menus, lists of courses and general noise. Talk to people. When people want to know something they usually ask someone. So front your LMS/VLE with a chat function. Most young people have already switched to messaging, away from email and even traditional social media. This is the real screen of a real person; she's 19. There isn't even a browser or phone icon – it's largely messaging. Dialogue is our most natural form of communication, so front learning with dialogue. A chat interface also dramatically reduces cognitive overload. This is why it is so popular – it is easy to use and feels natural.

Meet Otto
Otto, from Learning Pool, is the best example I've seen of this. Ask a question and either a human or the back-end LMS (now invisible) will respond and find the relevant answer, resource or learning experience. It can access simple text answers, pieces of e-learning and/or external resources. So, when someone comes across something they don't understand or need to know, for whatever reason, they have the opportunity to simply ask, and the chatbot will respond, either with a quick answer or a flow of questions that try to pinpoint what you really need. If the system can't deliver, it knows someone who can. It's not just the LMS that can be made invisible, it's the whole structure of 'learning' – the idea that learning is something separate, done in courses, and formal. Training gets a bad rap for a reason – it's all a bit, well, dull and inflexible. At one point in my life I point blank refused to be in a room with round tables, a flipchart, coloured pens and a bowl of mints for inspiration. The sooner that becomes invisible the better. Book the webinar on chatbots in learning here.[...]
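Here is a minimal sketch, under invented assumptions, of the routing idea that Otto illustrates: a chat front end takes the question, looks for a quick answer, a chunk of e-learning or an external resource, and hands over to a person if nothing fits. The content index, topics and URLs are placeholders, not Learning Pool's actual implementation.

```python
# Toy "invisible LMS" router: match a question against an indexed content store
# (quick answers first, then modules, then links) and escalate to a human otherwise.

import re

CONTENT_INDEX = [
    {"type": "answer",   "topic": "expenses",   "payload": "Claims go through the finance portal by the 25th."},
    {"type": "module",   "topic": "gdpr",       "payload": "https://example.com/elearning/gdpr-basics"},
    {"type": "resource", "topic": "onboarding", "payload": "https://example.com/handbook/welcome"},
]

def route(question):
    words = set(re.findall(r"[a-z]+", question.lower()))
    for item in CONTENT_INDEX:   # list order encodes the preference: answer, module, resource
        if item["topic"] in words:
            return item
    return {"type": "human", "payload": "Forwarded to your L&D contact. Expect a reply today."}

for q in ["How do I claim expenses?", "Where is the GDPR course?", "Who fixes the coffee machine?"]:
    print(q, "->", route(q))
```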



The Square and the Tower – networks and hierarchies

2017-12-04T17:19:13.024+00:00

The Square and the Tower by Niall Ferguson takes the public square in Siena and the tall tower that looms above it as a metaphor for flat, open networks and their accompanying hierarchical structures. My friend Julian Stodd starts his talks with a similar distinction between open, flat networks and formal, hierarchical structures (although both are networks, as a hierarchy is just one form of network). Networks tend to be more creative and innovative, hierarchies more restricted. In most contexts you need both. Ferguson's point is that history shows that both have been around for a very long time. Indeed, he tries to rewrite history in terms of these two opposing forces. He sees history through the lens of networks, the main distinction being between disruptive networks, often fuelled by technology, such as tool making (stone axes etc.), language, writing, alphabets, paper, printing, transport, radio, telegraph, television and the internet; and institutional hierarchies such as families, political parties, companies and so on. Networks come in all shapes and sizes. In terms of communities we have criminal networks, terrorist networks, jihadi networks, intelligence networks and so on. In terms of technology, social networks, telephone networks, radio networks, electricity networks. History, he thinks, understates the role of networks. We now even have cyberwars between networks. This is the age of networks.

Technologies and networks
We can trace this back to the fact that we are a species that has evolved to 'network'. Our brains are adapted towards social interaction and groups. We, the co-operative ape, have distributed cognition, and this has increased massively as technology has allowed us to network more widely. Technologies have been the primary catalysts. Nevertheless, much human behaviour has been tempered by Chiefs, Kings, Lords, Emperors and so on… hierarchical structures that lead and control; even the web is now spun by hierarchical and rapacious spiders – the giant tech companies. His analysis of Europe's failure is interesting here, as we have Apple, Google, Facebook, Amazon, Microsoft and Netflix in the US, and Baidu, Alibaba and Tencent in China. Europe merely regulates. These oligopolies dominate the networks.

The study of networks goes back to Euler's seven bridges problem, with a fuller look at nodes, edges, hubs and clusters. What is clear is that networks are rarely open and low density. They collapse into clusters and tribes. This in itself still produces not so much six degrees of separation (actually closer to five) as 3.57 degrees if you are on Facebook. There is an attempt to identify common features of networks: No man is an island; Birds of a feather flock together; Weak ties are strong; Structure determines virality; Networks never sleep; Networks network; The rich get richer.

Then, by example, he takes some deeper dives into the Medicis, as he regards the Renaissance as the first of the truly networked ages. Then the age of discovery, the catalysts being navigational technology and trade networks. But the big disruptive network was the Reformation, partially caused by printing. The fact that Luther did (or did not) nail his 95 theses to the door is beside the point. What matters is the printing press that allowed the spread of these ideas, and freedom of expression to challenge the hierarchy of the church. The control of language through Latin, and of knowledge through scripture, was blown wide open. From the Reformation came revolutions, again fuelled by print and networks. In addition there were financial networks, sometimes ruled by family hierarchies, such as the Rothschilds. Scientific and industrial networks flourished, giving us industrial revolutions. Intellectual networks such as Th[...]



Christmas Party shenanigans – let’s fight for the right to paaarty….. and call it a HR-free zone

2017-11-27T15:08:37.835+00:00

The Christmas Party is a small, intense pool of chaos in the corporate year, a licence to misbehave, drink too much, say things you otherwise wouldn't. Only on the surface is it a celebration of the company and its achievements for the year. In fact, it is the opposite: a Dionysian release from the Kafkaesque restrictions of HR and hierarchy. It is an opportunity to let rip – be in the company but not subject to its rules. The worst possible venue for the Christmas Party is on company premises.

What happens at the party stays at the party
The Christmas party has little to do with Christmas. Giving out presents would be bizarre, unless they were weirdly satirical. Carols are replaced by party hits. This is no time to reflect on moral issues but a once-a-year chance to be amoral, even immoral, if at midnight you're still capable of discerning the difference. A sure sign of this is the yearly debate about whether partners should be included – usually a charade that ends in their exclusion. Everyone knows that they are the ones who would dampen the whole affair and encourage people to leave early, just as the real fun begins.

When I was the CEO of a company I had to rescue a lad who had been caught with cocaine by the staff of the venue. I hadn't even finished my soup! He was spread-eagled against a wall by the bouncers. Solution? I did a deal with the venue manager to use the same venue for the next year's party if they let him off. We didn't sack him – this was a party in Brighton, the town that, as Keith Waterhouse once famously said, "looks as though it has been up all night helping the police with their enquiries". At another, there was a discussion the next day about the sauna trip (famously seedy in Brighton) after the Christmas party, where nipple rings, piercings and tattoos had been compared. There were always shenanigans, and so there should be.

My friend Julian Stodd tells the story of two people being sacked because they posted images of themselves getting drunk and throwing up at their Christmas Party. The American CEO had got wind of this (why he'd be interested is beyond me) and had taken action, bringing the full force of HR bureaucracy down upon them. This is pathetic. It's as pathetic as searching through Facebook to find what a potential employee did when they were a teenager. HR has no business being judge and jury, unless something has caused harm to others. The Christmas Party, in particular, is a no-go zone for that sort of bullshit.

Tales of Christmas Parties Past become part of an organisation's folklore. The planning needs clear execution, but everyone knows that the aim is to organise an event that gradually descends into chaos. We have, as a species, always celebrated through feasts and drinking. Long may it continue in work. It's the perfect opportunity to put the middle finger up to company values, not that anyone pays attention to them anyway, especially those idiotic acronyms, where the words have clearly been invented to fit the letters of the word, or lists of abstract nouns all starting with the same letter. For example, "innovation, integrity and i*****… what was that third one again?" People have their own values and HR has no business telling them what their values should be. They're personal. Most employees will have values, and they'll be leaving your organisation for another at some time, where another set of anodyne words will be put forward as 'values'. Keep it simple – you need only one rule: 'Don't be a Dick!'

Back to the party – organisations need this Dionysian release valve, as it vents frustrations, allows simmering relationships to form, people to show their true selves, not playing the usua[...]



Janesville - a town that explains Trump and also why you shouldn't judge or blame people for being poor

2017-11-20T22:17:41.155+00:00

You're put in a town that implodes when the car plant closes down and 9000 people lose their jobs. GM was a mess – incompetent management, old models, a company that failed to innovate. As if that wasn't enough, Janesville is hit with Biblical levels of rain (climate change?). This is journalism at its best, by a Pulitzer-winning writer, written from the perspective of the people affected. Want to know why working America is pissed? Read this book. It is told with compassion but realism, through the lives of real people in a real town.

For over 100 years they had produced tractors, pick-ups, trucks, artillery shells and cars. Obama came and went, the financial crisis hammered them deeper into the dirt, but while the banks were bailed out by the state, the state bailed on the people. On top of this, a second large local employer, Parker Pens, outsourced to Mexico, but the market for upmarket pens was also dying. The ignominy of being asked to extend your wages by a few weeks by going down to Mexico to train their cheaper labour was downright evil. Then the adjunct businesses started to fail: the suppliers, trades, shops, restaurants, nurseries for two-income families – then the mortgage and rent arrears, foreclosures, falling house prices, negative equity. As middle-class jobs go, they push down on working-class jobs and the poor get poorer.

"Family is more important than GM" – this is the line that resonated most with me in the book. In this age of identity politics, most people still see a stable family and their community as their backstops. The left and right have lost focus on this. The community didn't lie down – they fought for grants, did lots themselves to raise money, helped each other – but it was not enough. Grants for retraining were badly targeted; training people for reinvention is difficult for monolithic, manufacturing workforces. Some of it was clearly hopeless, like discredited Learning Styles diagnosis and overlong courses of limited relevance to the workplace or practice. Problems included the fact that many couldn't use computers, so there was huge drop-out, more debt and little in the way of workplace learning. Those that did full degrees found that what few jobs there were had been snapped up while they were in college – their wages dropped the most, by nearly half. One thing did surprise me: the curious offshoot that was anti-teacher hostility. People felt let down by a system that doesn't really seem to work and saw teachers as having great holidays, pensions and healthcare, while they were thrown out of work. The whole separation of educational institutions from workplaces seems odd.

Jobs didn't materialise. What jobs there are exist in the public sector – in welfare charities and jails. A start-up provided few jobs; many commuted long distances to distant factories. Even for those in work, there was a massive squeeze on wages, in some cases a 50% cut, sometimes more. In the end jobs came back but real wages fell. Healthcare starts to become a stretch. But it's the shame of poverty, using food banks, homeless teenagers and a real-life tragedy 200 pages into the book that really shakes you.

The book ends with the divide between the winners and losers. This is the divide that has shattered America. Janesville is the bit of America that tourists, along with East and West coast liberals, don't see. The precariat are good people who are having bad things done to them by a system that shoves money upwards into the pockets of the rich. Looked down upon by liberals, they are losing faith in politics, employers, the media, even education. Wisconsin turned Republican and Trump was elected. The economist Mark Blyth attributes the Trump win to their wages squeeze and[...]



Jaron Lanier: Dawn of the New Everything: A Journey Through Virtual Reality

2017-11-18T12:09:23.539+00:00

As a fan of VR I was looking forward to this book. Lanier is often touted as the inventor, the father or, more realistically, the guy who came up with the phrase 'Virtual Reality'. I'm not sure that any of this is true and, to be fair, he says as much late in the book. The most curious thing about the book is how uninteresting it is on VR – its core subject. Lots on the early failed stuff, and endless musings on early tech folk, but little that is truly enlightening about contemporary VR. My problem is that it's overwritten. No, let me rephrase that: it's self-indulgently overwritten. I've always liked his aperçus, little insights that make you look at technology from another perspective, such as 'Digital Maoism' and 'Micropayments', but this is an over-long ramble through an often not very interesting landscape. He has for many years been a gadfly for the big tech companies, but the book is written from within that same Silicon Valley bubble. Critical of how Silicon Valley has turned out, he's writing for the folk who like this world and want to feel its early pulse. He finds it difficult to move out of that bubble. I'm with him on the ridiculous Kurzweil utopianism, but when Lanier moves out into philosophy, or areas such as AI, it's all a bit hippy-dippy. On AI there's a rather ridiculous attempt at a sort of Platonic dialogue that starts with a category mistake, treating VR as the opposite of AI. No – they are two entirely different things, albeit with connections. Although it is interesting to describe AI as a religion (some truth in this), as it has transhuman aspects, it's a superficially clever comment without any accompanying depth of analysis. I was disappointed. You Are Not A Gadget was an enlightening book; this is a bit of a shambles.[...]



7 ways to use AI to massively reduce costs in the NHS

2017-12-03T16:51:02.463+00:00

I once met Tony Blair and asked him, "Why are you not using technology in learning and health to free it up for everyone, anyplace, anytime?" He replied with an anecdote: "I was in a training centre for the unemployed and did an online module – which I failed. The guy next to me also failed, so I said, 'Don't worry, it's OK to fail, you always get another chance…' To which the unemployed man said, 'I'm not worried about me failing. I'm unemployed – you're the Prime Minister!'" It was his way of fobbing me off. Nevertheless, 25 years later, he has published this solid document on the use of technology in policy, especially education and health. It's full of sound ideas around raising our game through the current wave of AI technology. It forms the basis for a rethink around policy, even the way policy is formulated, through increased engagement with those who are disaffected, and direct democracy. Above all, it offers concrete ideas in education, health and a new social contract with the tech giants to move the UK forward.

In healthcare, given the challenges of a rising and ageing population, the focus should be on increasing productivity in the NHS. To see all solutions in terms of increasing spend is to stumble blindly onto a never-ending escalator of increasing costs. Increasing spend does not necessarily increase productivity; it can, in some cases, decrease productivity. The one thing that can fit the bill, without inflating the bill, is technology, AI in particular. So how can AI increase productivity in healthcare?
1. Prevention
2. Presentation
3. Investigation
4. Diagnosis
5. Treatment
6. Care
7. Training

1. Prevention
Personal devices have taken data gathering down to the level of the individual. It wasn't long ago that we knew far more about our car than our own bodies. Now we can measure signs, critically, across time. Lifestyle changes can have a significant effect on the big killers: heart disease, cancer and diabetes. Nudge devices, providing the individual with data on lifestyle – especially exercise and diet – are now possible. Linked to personal accounts online, personalised prevention could do exactly what Amazon and Netflix do, by nudging patients towards desired outcomes. In addition, targeted AI-driven advertising campaigns could also have an effect. Public health initiatives should be digital by default.

2. Presentation
Accident and Emergency can quickly turn into a war zone, especially when General Practice becomes difficult to access. This pushes up costs. The trick is to lower demand and costs at the front end, in General Practice. First, GPs must adopt technology such as email, texting and Skype for selected patients. There is a double dividend here, as this increases productivity at work, since millions need not take time off work to travel to a clinic, sit in a waiting room and get back home or to work. This is a particular problem for the disabled, the mentally ill and those who live far from a surgery. Remote consultation also means less need for expensive real estate – especially in cities. Several components of presentation are now possible online: talking to the patient, visual examination, even high-definition images from mobile for dermatological investigation. As personal medical kits become available, more data can be gathered on symptoms and signs. Trials show patients love it and successful services are already being offered in the private sector. Beyond the simple GP visit lies a much bigger prize. I worked with Alan Langlands, the CEO of the NHS, the man who implemented NHS Direct. He was adamant that a massive expansion of NHS Direct was needed but commented th[...]



63% of children entering primary school today will be doing jobs that don’t yet exist & 47% of jobs will be automated... oh yeah...10 reasons why this is bullshit

2017-12-20T13:13:09.285+00:00

63% of children entering primary school today will be doing jobs that don’t yet exist. I’m tired of seeing this at conferences... I read and hear it often. The last time was from two speakers on the same platform, an 'Internationally Recognized Expert on the Future of Work and the Future of Learning' and a senior bod at LinkedIn, at OEB in Berlin. Let's be clear, this quote is made up – there is no source – it's the touchstone of the charlatan.

47% of jobs will be automated... I’ve lost count of the times I’ve seen this mentioned in newspapers, articles and conference slides. It is from a 2013 paper by Frey and Osborne. First, it refers only to the US, and only states that such jobs are under threat. Dig a little deeper and you find that it is a rather speculative piece of work. AI is an ‘idiot savant’, very smart on specific tasks but very stupid and prone to massive error when it goes beyond its narrow domain. This paper errs on the idiot side.

They looked at 702 job types then, interestingly, used AI itself (machine learning), which they trained with 70 jobs, judged by humans as being at risk of automation or not. They then trained a ‘classifier’, a software program, with this data to predict the probability of the other 632 jobs being automated (a rough sketch of this pipeline is given at the end of this post). You can already see the weaknesses. First, the human-labelled data set – get this wrong and the error sweeps through the much larger set of AI-generated conclusions. Second, the classifier, even if it is out by only a little, can produce wildly wrong conclusions. The study itself, largely automated by AI, rather than being a credible forecast, is more useful as a study of what can go wrong in AI. Many other similar reports in the market parrot these results. To be fair, some are more fine-grained than the Frey and Osborne paper, but most suffer from the same basic flaws.

Flaw 1: Human fears trump tech
The great flaw is over-egging the headline. The claim that 47% of jobs may be automated makes a great headline but is a lousy piece of analysis. Change does not happen this way. In many jobs the context or culture means that complete automation will not happen quickly. There are human fears and expectations that demand the presence of humans in the workplace. We can automate cars, even airplanes, but it will be a long time before airplanes fly across the Atlantic with several hundred passengers and no pilot. There are human perceptions that, even if irrational, have to be overcome. We may have automated waiters that trolley food to your table, but the expectation that a real person will deliver the food and engage with you is all too real.

Flaw 2: Institutional inertia trumps tech
Organisations grow around people and are run by people. These people build systems, processes, budget plans and funding processes that do not necessarily lead quickly to productivity gains through automation. They often protect people, products and processes in ways that put a brake on automation. Most organisations have an ecosystem that makes change difficult – poor forecasting, no room for innovation, arcane procurement and sclerotic regulations. This all militates against innovative change. Even when faced with something that saves a huge amount of time and cost, there is a tendency to stick to existing practice. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Flaw 3: Low labour costs
What is often forgotten in such analyses is the business case and labour supply context. 
Automation will not happen where the investment cost is higher than hiring human labour, and is less [...]
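To make the weakness concrete, here is a toy sketch of the kind of pipeline described in this post: a small, hand-labelled seed set of jobs trains a classifier, which then scores a 'probability of automation' for everything else. All of the job names, features and labels below are invented for illustration; this is not Frey and Osborne's actual model, which used richer occupational data and a more sophisticated classifier.

```python
# Toy sketch of a seed-set-plus-classifier pipeline (illustrative only).
# A handful of hand-labelled jobs train a model that then scores the rest.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each job gets a few 0-1 scores, e.g. (manual dexterity, creativity,
# social intelligence required) - hypothetical features, invented here.
labelled_jobs = {
    "telemarketer":     ([0.1, 0.1, 0.3], 1),   # 1 = judged automatable
    "data entry clerk": ([0.2, 0.1, 0.1], 1),
    "therapist":        ([0.2, 0.7, 0.9], 0),   # 0 = judged safe
    "choreographer":    ([0.8, 0.9, 0.8], 0),
}
X = np.array([features for features, _ in labelled_jobs.values()])
y = np.array([label for _, label in labelled_jobs.values()])

clf = LogisticRegression().fit(X, y)   # the tiny hand-labelled 'seed' set

# The remaining jobs only ever get a predicted probability - note how
# completely it depends on the seed labels and the chosen features.
unlabelled = {"paralegal": [0.1, 0.3, 0.4], "barber": [0.9, 0.4, 0.8]}
for job, features in unlabelled.items():
    p = clf.predict_proba([features])[0, 1]
    print(f"{job}: {p:.0%} 'probability of automation'")
```

Shift a couple of seed labels or features and the scores for the unlabelled jobs swing wildly, which is exactly the point about a small training set sweeping through the much larger set of conclusions.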



EdTech – all ‘tech’ and no ‘ed’ – why it leads to mosquito projects that die….

2017-11-04T16:23:14.133+00:00

‘EdTech’ is one of those words that makes me squirm, even though I’ve spent 35 years running, advising, raising investment, blogging and speaking in this space. Sure, it gives the veneer of high-tech, Silicon Valley thinking that attracts investment… but it’s the wrong word. It skews the market towards convoluted mosquito projects that quickly die. Let me explain why.

Ignores huge part of market
Technology or computer-based learning long pre-dated the term EdTech. In fact the computer-based learning industry cut its teeth, not in ‘education’, but in corporate training. This is where the big LMSs developed, where e-learning, scenario-based learning and simulation grew. The ‘Ed’ in ‘EdTech’ suggests that ‘education’ is where all the action and innovation sits – which is far from true.

Skews investment
The word EdTech also skews investment. Angels, VCs, incubators, accelerators and funds talk about EdTech in the sense of schools and universities – yet these are two of the most difficult, and unpredictable, markets in learning. Schools are nationally defined through regulation, curricula and accreditation. They are difficult to sell to as they have relatively low budgets. Universities are just as difficult, with a strong anti-corporate ethos and a difficult selling environment. EdTech wrongly shifts the centre of gravity away from learning towards ‘schooling’.

Not innovative
I’m tired of seeing childish and, to be honest, badly designed ‘game apps’ in learning. They are the first port of call for people who are all ‘tech’ and no ‘ed’. It wouldn’t be so bad if they really were games players or games designers, but most are outsiders who end up making poor games that no one plays. Or yet another ‘social’ platform, falling for the old social constructivist argument that people only learn in social environments. EdTech in this sense is far from innovative; it’s innocuous, even inane. Innovation is only innovation if it is sustainable. EdTech has far too many unsustainable models – fads dressed up as learning tools and services.

Mosquitos not turtles
Let’s start with a distinction. First, there are what I call MOSQUITO projects, which sound buzzy but lack leadership, real substance, scalability and sustainability. They’re short-lived, and often die as soon as the funding runs out or the paper/report is published. These are your EU projects, many grant projects…. Then there are TURTLES, sometimes duller but with substance, scalability and sustainability, and they’re long-lived. These are the businesses or services/tools that thrive.

Crossing that famous chasm from mosquito to turtle requires some characteristics that are often missing in seed investment and public sector funding in the education market. Too many projects fail to cross the chasm as they lack the four Ss:
Senior management team
Sales and marketing
Scalability
Sustainability

There are two dangers here. First, understimulating the market, so that mosquito projects fall into the gap as they fail to find customers and revenues. This is rarely to do with a lack of technical or coding skills but far more often a paucity of management, sales and marketing skills. The other danger is bogging projects down in overlong academic research, where one must go at the glacial speed of the academic year and ponderous evaluation, not the market. These projects lose momentum and focus and, in any case, no one pays much attention to the results. As the old saying goes, “When you want to move a graveyard, don’t expect much help from the occupants.”E[...]



7 reasons to abandon multiple-choice questions

2017-11-23T10:04:43.124+00:00

Long the staple of e-learning and a huge range of low- and high-stakes tests, the MCQ should be laid to rest, or at least used sparingly. It has several flaws…

1. Probability
A 25% chance of getting it right on the standard four-option item makes it less than taxing. True/false, really a two-option MCQ, is of course worse (a quick back-of-the-envelope check at the end of this post makes the point).

2. Unreal
You rarely, if ever, in the real world have to choose things from short lists. This makes the test item somewhat odd and artificial, dissociated from reality. They seem dissonant, as this is not the way our brains work (we don't cognitively select from four-item lists). They are therefore also weak on the transfer of knowledge to the real world.

3. Distractors distract
It is too easy to remember the distractor, as opposed to the right answer. The fact that they are designed to distract makes them candidates for retention, and so MCQs can become counterproductive.

4. Can be cheated
Pick the longest item, second-guess the designer. Look for opposites and the internal logic of the distractor options. There are credible cheat-lists for multiple choice – Poundstone’s research shows that these approaches increase your chance of getting better scores. (20 cheats here)

5. Surface misleading
Take these two questions.
What is the capital of Lithuania? Tallinn, Vilnius, Riga, Minsk
What is the capital of Lithuania? Berlin, Vilnius, Warsaw, Helsinki
Surface differences in options make these very different test items. And it is easy to introduce these surface differences, reducing the validity of the test items and the test.

6. Difficult to write
I have written a ton of MCQs over 35 years – believe me, they are seriously difficult to write. It is easy to select a noun from the text and come up with three other nouns. What is difficult is to test real understanding.

7. Little effort
This is the big one. As Roediger and McDaniel state in their book Make It Stick, choosing from a list requires little cognitive effort. You choose from a limited set of options, and do not use effortful recall (which in itself increases retention).

Conclusion
Multiple-choice is not a terrible test item, but it has had its day as the primary test item in online learning. We’re still designing test items in lock-step because the tools encourage us to do so, ignoring more powerful open-response questions that require recall and the powerful act of writing/typing, which, in itself, reinforces learning. New tools, such as WildFire, the AI-driven content creation service, focus on open-response and effortful learning, for all of these seven reasons and more. The learner has to make more effort, and that means deeper processing and higher retention.[...]
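To put the probability point in numbers, here is a minimal check. The ten-item test and the 70% pass mark are my own illustrative assumptions, not anything from the post:

```python
from math import comb, ceil

def p_pass_by_guessing(n_items=10, n_options=4, pass_mark=0.7):
    """Chance of reaching the pass mark on an MCQ test by blind guessing.
    Assumes independent items, each guessed correctly with probability
    1/n_options. Illustrative numbers only."""
    p = 1 / n_options
    needed = ceil(pass_mark * n_items)
    # Binomial tail: P(number correct >= needed)
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(needed, n_items + 1))

print(f"{p_pass_by_guessing():.2%}")               # four-option items
print(f"{p_pass_by_guessing(n_options=2):.2%}")    # true/false items
```

On those assumed numbers, blind guessing clears a four-option test well under 1% of the time, but clears a true/false test roughly one time in six, which is why two-option items are even weaker.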



Kirkpatrick evaluation: kill it - happy sheet nonsense, well past its sell-by-date

2018-02-02T11:59:36.865+00:00

Kirkpatrick has for decades been the only game in town in the evaluation of corporate training, although hardly known in education. In his early Techniques for Evaluating Training Programmes (1959) and Evaluating Training Programmes: The Four Levels (1994), he proposed a standard approach to the evaluation of training that became a de facto standard. It is a simple and sensible schema but has not stood the test of time. First up – what are Kirkpatrick's four levels of evaluation?

Four levels of evaluation

Level 1 Reaction
At the reaction level one asks learners, usually through ‘happy sheets’, to comment on the adequacy of the training, the approach and perceived relevance. The goal at this stage is simply to identify glaring problems. It is not to determine whether the training worked.

Level 2 Learning
The learning level is more formal, requiring pre- and post-tests. This allows you to identify those who had existing knowledge, as well as those at the end who missed key learning points. It is designed to determine whether the learners actually acquired the identified knowledge and skills.

Level 3 Behaviour
At the behavioural level, you measure the transfer of the learning to the job. This may need a mix of questionnaires and interviews with the learners, their peers and their managers. Observation of the trainee on the job is also often necessary. It can include an immediate evaluation after the training and a follow-up after a couple of months.

Level 4 Results
The results level looks at improvement in the organisation. This can take the form of a return on investment (ROI) evaluation. The costs, benefits and payback period are fully evaluated in relation to the training deliverables. JJ Phillips has argued for the addition of a separate, fifth, “Return on Investment (ROI)” level, which is essentially about comparing the fourth level of the standard model to the overall costs of training. However, ROI is not really a separate level, as it can be included in Level 4. Kaufman has argued that it is merely another internal measure and that if there were a fifth level it should be external validation from clients, customers and society. In fact there have been other evaluation methods with even more levels, completely over-engineering the solution.

Criticism

Level 1 – keep 'em happy
Traci Sitzmann’s meta-studies (68,245 trainees, 354 research reports) ask ‘Do satisfied students learn more than dissatisfied students?’ and ‘Are self-assessments of knowledge accurate?’ Self-assessment is only moderately related to learning. Self-assessment captures motivation and satisfaction, not actual knowledge levels. She recommends that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.

So favourable reactions on happy sheets do not guarantee that the learners have learnt anything, and one has to be careful with these results. This data merely measures opinion. Learners can be happy and stupid. One can express satisfaction with a learning experience yet still have failed to learn. For example, learners may have enjoyed the experience just because the trainer told good jokes and kept them amused. Conversely, learning can occur and job performance improve even though the participants thought the training was a waste of time. Learners often learn under duress, through failure or through experiences which, although difficult at the time, prove to be use[...]



Gagne's 9 dull Commandments - why they cripple learning design...

2017-10-24T08:44:37.654+00:00

50 year old theory
It is over 50 years since Gagne, a closet behaviourist, published The Conditions of Learning (1965). In 1968 we got his article Learning Hierarchies, then Domains of Learning in 1972. Gagne’s theory has five categories of learning: Intellectual Skills, Cognitive Strategies, Verbal Information, Motor Skills and Attitudes. OK, I quite like these – better than the oft-quoted Bloom trilogy (1956). Then something horrible happened.

Nine Commandments
He claimed to have found the Nine Commandments of learning. A single method of instruction that applies to all five categories of learning, the secret code for divine instructional design. Follow the linear recipe and learning will surely follow.
1 Gaining attention
2 Stating the objective
3 Stimulating recall of prior learning
4 Presenting the stimulus
5 Providing learning guidance
6 Eliciting performance
7 Providing feedback
8 Assessing performance
9 Enhancing retention and transfer to other contexts

Instructional designers often quote Gagne, and these nine steps, in proposals for e-learning and other training courses, but let me present an alternative version of this list:
1 Gaining attention
Normally an overlong animation, corporate intro or dull talking head, rarely an engaging interactive event. You need to grab attention, not make the learner sit back in their chair and let their mind wander.
2 Stating the objective
Now bore the learner stupid with a list of learning objectives (really trainerspeak). Give the plot away and remind them of how boring this course is going to be.
3 Stimulating recall of prior learning
Can you think of the last time you considered the details of the Data Protection Act?
4 Presenting the stimulus
Is this a behaviourist I see before me? Yip. Click on Mary, Abdul or Nigel to see what they think of the Data Protection Act – cue speech bubble... or worse, some awful game where you collect coins or play the role of Sherlock Holmes....
5 Providing learning guidance
We’ve finally got to some content.
6 Eliciting performance
True/false or multiple-choice questions, each with at least one really stupid option (cheat list for MC here).
7 Providing feedback
Yes/no, right/wrong, correct/incorrect… try again.
8 Assessing performance
Use your short-term memory to choose options in the final multiple-choice quiz.
9 Enhancing retention and transfer to other contexts
Never happens! The course ends here, you’re on your own, mate….

Banal and dull
First, much of this is banal – get their attention, elicit performance, give feedback, assess. It’s also an instructional ladder that leads straight to Dullsville, a straitjacket that strips away any sense of build and wonder, almost guaranteed to bore more than enlighten. What other form of presentation would give the game away at the start? Would you go to the cinema and expect to hear the objectives of the film before it starts?

It’s time we moved on from this old and now dated theory, using what we’ve learnt about the brain and the clever use of media. We have AI-driven approaches, such as WildFire and CogBooks, that personalise learning..... And don’t get me started on Maslow, Mager or Kirkpatrick![...]



AI-driven tool produces high quality online learning for global company in days not months

2017-11-02T14:59:27.532+00:00

You have a target of two thousand apprentices by 2020 and a sizeable £2 million plus pot from the Apprenticeship Levy. This money has to, by law, be spent on training. The Head of Apprenticeships in this global company is a savvy manager and they already have a track record in the delivery of online learning. So they decided to deliver a large portion of that training using online learning.

Blended Learning
Our first task was to identify what was most useful in the context of Blended Learning. It is important to remember that Blended Learning is not Blended TEACHING. The idea is to analyse the types of learning, types of learners, context and resources to identify your optimal blend, not just a bit of classroom and a bit of online stuff, stuck together like Velcro and called ‘blended’. In this case the company will be training a wide range of apprentices over the coming years, a major part of their recruitment strategy, important to the company and to the young people joining it.

Learning
The apprentice ‘frameworks’ identify knowledge, behaviours and competences as the three desired types of learning, and all of these have to be assessed. The first project, therefore, looked at the ‘knowledge’ component. This was substantial, as few new apprentices have much in the way of knowledge in this sector. Behaviours and competences need to be primed and supported by underlying knowledge.

Assessment
Additionally, assessment matters in apprenticeships, both formatively, as the apprentices progress, and summatively, at the end. Assessment is a big deal, as funding, and the successful attainment of the apprentice, depend on objective and external assessment. It can’t be fudged.

Context
These young apprentices will be widely distributed in retail outlets and other locations, here and abroad. They may also work weekends and shifts. One of our goals was to provide training where and when it was needed, on demand, at times when workload was low. Content, Level 3 and Level 2, had to be available 24/7, on a range of devices, as tablets were widespread and mobile increasingly popular.

Solution
WildFire was chosen, as it could produce powerful online content that is:
Highly retentive
Aligned with assessment
Deliverable on all devices
Quick to produce
Low cost
Using an AI-driven content creation tool, we produced 158 modules (60 hours of learning) in days, not months. After producing Level 3, we could quickly produce the Level 2 courses and load them up to the LMS for tracking user performance. The learner uses high-retention, open input, rather than weak multiple-choice questions. The AI-driven content creation tool not only produced the high quality online content quickly, it produced links out to additional supplementary content that proved extremely useful in terms of further learning. It only accepts completion when 100% competence is achieved, and the learner has to persevere in a module until that is achieved (a rough sketch of this completion rule is given at the end of this post).

Conclusion
The team, both the commissioning manager and the project manager, were really up for this. First, the use of new AI-driven tech excited them. Second, the process turned out to be quick and relatively hassle free. We produced so much content so quickly that it ran ahead of the organisation's ability to test it! Nevertheless, we got there, met very tight deadlines and came out the other side feeling that this really was a game changer. Third, we were all proud of the output. It's great working with a project manager who sees problems as simply thin[...]
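As a footnote, the completion rule above is easy to picture as a simple mastery loop. The sketch below is my own minimal illustration under those assumptions, not WildFire's actual implementation; the item names and the ask() callback are invented:

```python
# Minimal sketch of a 100%-competence completion gate (illustrative only).
def run_module(items, ask):
    """Keep re-presenting unmastered items until every one is answered correctly."""
    remaining = list(items)
    while remaining:
        # Only items still answered incorrectly are carried into the next pass.
        remaining = [item for item in remaining if not ask(item)]
    return True  # completion is only reported at 100% competence

# Toy usage: the first pass misses one item, the second pass clears it.
attempts = iter([False, True, True, True])
print(run_module(["item A", "item B", "item C"], lambda item: next(attempts)))  # True
```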



Is there one book you’d recommend as an introduction to AI? Yes. Android Dreams by Toby Walsh

2017-10-14T12:36:03.457+00:00

Although there are books galore on AI, from technical textbooks to potboilers, few are actually readable. Nick Bostrom’s ‘Superintelligence’ is dense and needed a good edit, ‘The Future of the Professions’ is too dense, ‘The Rise of the Robots’ good but a bit dated and lacking depth, and ‘Weapons of Math Destruction’ a one-sided and exaggerated contrarian tract. At last there’s an answer to that question “Is there one book you’d recommend as an introduction to AI?” That book is Android Dreams by Toby Walsh.

I met Toby Walsh in Berlin; he’s measured and a serious researcher in AI. So I was looking forward to this book and wasn’t disappointed. The book, like the man, is neither too utopian nor too dystopian. He rightly describes AI as an IDIOT SAVANT, and this sets the tone for the whole book. In general, you could summarise his position on AI as overestimated in the short term, underestimated in the long term. He sees AI as having real limitations, and progress in robotics, and even the much lauded deep learning, have their Achilles’ heels – back-propagation being one.

On ethics he focuses not on the surface criticisms about algorithmic bias but on whether weaponised AI is a threat – it is – and it’s terrifying. Loved it when he skewered the Frey and Osborne Oxford report on the idea that 47% of jobs are under threat from AI. He explains why they got so many things wrong by going through a series of job types, explaining why robots will not be cutting your hair or serving your food in restaurants any time soon. He also takes a healthy potshot at academics and teachers who think that everyone else’s jobs are at risk, except their own. The book has all the hallmarks of being written by an expert in the field, with none of the usual exaggeration or ill-informed negativity that many commentators bring to AI. AI is not one thing, it is many things – he explains that well. AI can be used for good as well as evil – he explains that well. AI is probably the most important tech development since language, writing and printing – he explains that well. Worth reading, if only for some of his speculative predictions – driverless cars, your doctor will be a computer, Marilyn Monroe back in the movies, computer recruitment, talking to rooms, AI/robot sports, ghost ships, planes and trains, TV news made without humans, a personal bot that lives on after you die. This review was partly written using AI. Really.[...]



AI on land, sea, air (space) & cyberspace – it’s truly terrifying

2017-09-26T13:34:59.206+00:00

Vladimir Putin announced, to an audience of one million online, that “Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world… If we become leaders in this area, we will share this know-how with entire world, the same way we share our nuclear technologies today.” Elon Musk tweeted a reply, “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo”, then, “May be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory.”

That pretty much sums up the problem. Large and even small nations, even terrorist groups, may soon have the ability to use ‘smart’, autonomous AI-driven tech in warfare. To be honest, it doesn’t have to be that smart. A mobile device, a drone and explosives are all one needs to deliver a lethal device from a distance. You may even have left the country when it takes off and delivers its deadly payload. Here’s the rub – sharing may be the last thing we want to do. The problem with sharing is that anyone can benefit.

In truth, AI has long been part of the war game. Turing, the father of AI, used it to crack German codes, and thankfully contributed to ending the Second World War, and let’s not imagine that it has been dormant for the last half century. The landmine, essentially a dormant robot that acts autonomously, has been in use since the 17th century. One way to imagine the future is to extend the concept of the landmine. What we now face are autonomous, small landmines, armed with deadly force on land, sea, air and even space.

AI is already a major force in intelligence, security and the theatre of war. AI exists in all war zones, on all four fronts – land, sea, air (space) and cyberspace.

AI on land
Robot soldiers are with us. You can watch Boston Dynamics videos on YouTube and see machines that match humans in some, not all, aspects of carrying, shooting and fighting. The era of the AI-driven robot soldier is here. We have to be careful here, as the cognitive side of soldiering is far from being achieved. Nevertheless, in the DMZ between South and North Korea, robot guards are armed and will shoot on sight. Known as a Lethal Autonomous Weapons System (LAWS), such a guard will shoot on sight – and by sight we mean infrared detection and laser identification and tracking of a target. It has an AI-driven voice recognition system, asks for identification, and can shoot autonomously. This is a seriously scary development, as they are already mounted on tanks. You can see why these sentry or rapid response systems have become autonomous. Humans are far too slow in detecting incoming attacks or targeting with enough accuracy. Many guns are now targeted automatically, with sensors and systems way beyond the capabilities of any human.

AI at sea
Lethal Autonomous Weapons can already operate on or beneath the sea. Naval mines (let’s call them autonomous robots) have been in operation for centuries. Unmanned submarines have been around for decades and have been used for purposes good and bad, for example the delivery of drugs using autonomous GPS navigation, as well as finding aircraft that have gone down in mid-ocean. In[...]



ResearchEd - 1000 teachers turn up on a Saturday for grassroots event....

2017-09-14T18:20:10.346+00:00

Way back I wrote a piece on awful INSET days and how inadequate they were as CPD, often promulgating half-baked myths and fads. Organisations don’t, these days, throw their customers out of the door for an entire day of training. The cost and load on parents in terms of childcare is significant. Kids lose about a week of schooling a year. There is no convincing research evidence that INSET days have any beneficial effects. Many are hotchpotches of non-empirical training. Many (not all) are ill-planned, dull and irrelevant. So here’s an alternative.

ResearchED is a welcome antidote. A thousand teachers rock up to spend their Saturday, with 100 speakers (none of whom are paid), at a school in the East End of London, to share their knowledge and experiences. What’s not to like? This is as grassroots as it gets. No gun to the head by the head, just folk who want to be there – most as keen as mustard. They get detailed talks and discussions on a massive range of topics, but above all it tries to build an evidence-based approach to teaching and learning.

Judging from some on Twitter, conspiracy theories abound that Tom Bennett, its founder, is a bounder, in the pocket of… well, someone or other. The truth is that this event is run on a shoestring, and there are no strings attached to what minimal sponsorship there is to host the event. It’s refreshingly free from the usual forced feel of quango-led events or large conferences or festivals of education. Set in a school, with pupils as volunteers, even a band playing soul numbers, it felt real. And Tom walks the floor – I’m sure, in the end, he talked to every single person that day.

Tom invited me to speak about AI and technology, hardly a ‘trad’ topic. I did, to a full house, with standing room only. Why? Education may be a slow learner, but young teachers are keen to learn about research, examples and what’s new. Pedro De Bruyckere was there from Belgium to give an opposing view, with some solid research on the use of technology in education. It was all good. Nobody got precious.

But most of the sessions were on nuts and bolts issues, such as behaviour, teaching practice and assessment. For example, Daisy Christodoulou gave a brilliant and detailed talk on assessment, first demolishing four distorting factors, then giving practical advice to teachers on alternatives. I can’t believe that any teacher would walk out of that talk without reflecting deeply on their own attitudes towards assessment and practice.

What was interesting for me was the lack of the usual ‘teachers always know best’ attitudes. You know, that defensive pose, that it’s all about practice and that theory and evidence don’t matter, which simply begs the question, what practice? People were there to learn, to see what’s new, not to be defensive.

Even more important was Tom’s exhortation at the end to share – I have already done two podcasts on the experience, got several emails, and Twitter was twittering away like fury. He asked that people go back to school and talk, write, blog… whatever… so that’s what I’ve done here. Give it a go – you will rarely learn more in a single day – isn’t that what this is all about?[...]