Subscribe: Donald Clark Plan B
http://donaldclarkplanb.blogspot.com/atom.xml

Donald Clark Plan B



What is Plan B? Not Plan A!



Updated: 2017-12-15T11:53:37.029+00:00

 



7 solid reasons to suppose that chatbot interfaces will work in learning

2017-12-14T14:33:19.103+00:00

In Raphael’s painting various luminaries stand or sit in poses on the steps, but look to the left of Plato and Aristotle and you’ll see a poor-looking figure in a green robe talking to people – that’s Socrates. Most technology in teaching has run against the Socratic grain, such as the blackboard, turning teachers into preachers and lecturers. With chatbots we may be seeing the return of the Socratic method.

This return is being enabled by AI, in particular Natural Language Processing, but also through other AI techniques such as adaptive learning, machine learning and reinforcement learning. AI is largely invisible, but it does have to reveal itself through its user interface. AI is the new UI, but because the AI is doing a lot of the smart, behind-the-scenes work, it is best fronted by a simple interface – the simpler the better. The messenger interface seems to have won the interface wars, transcending menus and even social media. Simple Socratic dialogue seems to have risen, through the process of natural selection, as THE interface of choice, especially on mobile. So can this combination of AI and Socratic UI have an application in learning? There are several reasons for being positive about this type of interface in learning.

1. Messaging the new interface
We know that messaging, the interface used by chatbots, has overtaken social media over the last few years, especially among the young. Look at the mobile home screen of any young person and you’ll see the dominance of chat apps. The Darwinian world of the internet is the perfect testing ground for user interfaces, and messaging is what you are most likely to see when looking over the shoulder of a young person. So one could argue that for younger audiences chatbots are particularly appropriate, as they already use this as their main form of communication.
They have certainly led the way in its use, but one could also argue that there are plenty of reasons to suppose that most other people like this form of interface too.

2. Frictionless
Easy to use, it allows you to focus on the message, not the medium. The world has drifted towards messaging for the simple reason that it is simple. By reducing the interface to its bare essentials, the learner can focus on the more important task of communication and learning. All interfaces aim to be as frictionless as possible and, apart from speculative mind-reading from the likes of Elon Musk with Neuralink, this is as bare-bones as one can get.

3. Reduces cognitive load
Messaging is simple, a radically stripped-down interface that anyone can use. It requires almost no learning and mimics what we all do in real life – simply dialogue. Compared to any other interface it is low on cognitive load. There is little other than a single field into which you type, so it goes at your pace. What also matters is the degree to which it makes use of NLP (Natural Language Processing) to really understand what you type (or say).

4. Chunking
One of the joys of messaging, and one of the reasons for its success, is that it is succinct. It is by its very nature chunked. If it were not, it wouldn’t work. Imagine being on a flight with someone: you ask them a question and get a one-hour lecture in return. Chatbots chat, they don’t talk at you.

5. Media equation
In a most likely apocryphal story, when Steve Jobs presented the Apple Mac screen to Steve Wozniak, Jobs had programmed it to say ’Hello…”. Wozniak thought it unnecessary – but who was right? We want our technology to be friendly, easy to use, almost our companion. This is as true in learning as it is in any other area of human endeavour. Reeves & Nass, in The Media Equation, did 35 studies to show that we attribute agency to technology, especially computers.
We anthropomorphise technology in such a way that we think the bot is human, or at least exhibits human attributes. Our faculty of imagination finds this easy, as witnessed by our ready ability to suspend disbelief in the movies or when watching TV. It takes seconds and works in our favour with chatbots, as di[...]
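The mechanics behind points 3 and 4 – matching what the learner types and keeping replies chunked – can be sketched in a few lines. This is a minimal, illustrative sketch only; a real chatbot would use proper NLP intent classification rather than the keyword overlap assumed here, and the `INTENTS` table is invented.

```python
# Minimal sketch of a chunked, Socratic-style chat turn.
# Keyword overlap stands in for real NLP intent classification.

def match_intent(utterance, intents):
    """Pick the intent whose keywords best overlap the learner's words."""
    words = set(utterance.lower().split())
    scored = [(len(words & keywords), name) for name, keywords in intents.items()]
    best_score, best_name = max(scored)
    return best_name if best_score > 0 else None

def chunk(text, max_words=12):
    """Split a long answer into short, message-sized chunks (point 4)."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

INTENTS = {  # hypothetical intents for a learning bot
    "define": {"what", "define", "meaning"},
    "example": {"example", "show", "instance"},
}

print(match_intent("What is the meaning of reinforcement learning?", INTENTS))
# → define
```

The point of `chunk` is the one made above: a bot that answered with a one-hour lecture would break the medium, so long answers are split into message-sized pieces.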



Fully Connected by Julia Hobsbawm – I wish I hadn’t

2017-12-14T14:02:10.338+00:00

Having seen Julia get torn to pieces by an audience in Berlin, I decided to give the book a go. But first, Berlin. After an excruciating anecdote about being in the company of Royalty in St James’s Palace and meeting Zac Goldsmith (it made no sense, other than name-dropping), she laid out the ideas in her book, describing networks as including Facebook, Ebola and Zika – all basically the same thing, a ridiculous conflation of ideas. “All this social media is turning us into sheep,” she bleated. Then she asked, “How many of you feel unhappy in your jobs?” Zero hands went up. Oh dear, try again. “How many of you feel overloaded?” Three hands in a packed room. Oops – that punctured the proposition. She then made a quick retreat into some ridiculous generalisations: that she was the first to really look at networks, that Trump should be thrown off Twitter (a strong anti-freedom-of-expression line here – a bit worrying). Basically playing the therapeutic contrarian. The audience were having none of it; many of them were experts in this field.

Then came the blowback. Stephen Downes, who knows more than most on the subject of networks, was blunt: “Everything you’ve said is just wrong.” Wow. He then explained that there is a large literature on networks, that the subject has been studied in depth, and that she was low on knowledge and research. He was right. Andrew Keen, on Stephen Downes’s accusation that Hobsbawm was flaky on assumptions and research: “Good – glad to see someone with a hard-hitting point...” Claire Fox then joined the fray, pointing out that this contrarian stuff smacks of hysteria – it’s all a bit preachy and mumsy.

So, fast forward: I’m back from Berlin and have bought the book – Fully Connected. To be fair, I wanted to read the work for myself. Turns out the audience were right.

Fully Connected
The Preface opens with a tale about Ebola, setting the whole ‘networks are diseased and I have the cure’ tone of the book.
“Culture, diseases, ideas: they’re all about networks,” says Hobsbawm. Wow – she’s serious and really does want to conflate these things just to set up the self-help advice. What follows is a series of well-worn stuff about Moore’s Law, Stanley Milgram, Six Degrees of Separation, Taleb’s Black Swan, Tom Peters, Peter Drucker… punctuated by anecdotes about her and her family. It’s a curious mixture of dull, middle-class anecdotes and old-school stuff without any real analysis or insights.

Ah, but here comes her insight – her new term, ‘social health’. All is revealed. Her vision is pathological, the usual deficit view of the modern world. All of you out there are wrapped up in evil spiders’ webs, diseased, and I have the cure. Her two big ideas are The Way to Wellbeing and The Blended Self. All of this is wrapped up in pseudo-medical nonsense: information obesity, time starvation, techno-spread, organisational bloat. It’s like a bad diet book where you’re fed a diet of bad metaphors. Her ‘Hexagon’ of social health is the diagnosis and cure, as she puts herself forward as the next Abraham Maslow – replacing the pyramid with a hexagon – we’re networked, geddit?

Part two is even worse. The usual bromides around disconnecting, techno-shabbats, designing your honeycomb, the knowledge dashboard. Only then do you realise that this is a really bad self-help book based on a few personal anecdotes and no research whatsoever. The postscript says it all, a rambling piece about the Forth Road Bridge. I grew up in the town beneath that bridge and saw it built – but even I couldn’t see what she was on about. There are some serious writers in this area, like Andrew Keen, Nicholas Carr and others; Julia is not one of them.[...]



Invisible LMS: the LMS is not dead, but it needs to be invisible – front it with a chatbot

2017-12-12T13:53:25.502+00:00

Google is almost invisible. As the most powerful piece of back-end consumer software ever built, it hides behind a simple letterbox. Most successful interfaces follow this example of Occam’s Razor – the minimum number of entities to reach your goal.

Sadly, the LMS does the opposite. Often difficult to access and navigate, it looks like something from the 90s – that’s because it is something from the 90s. The clue is in the middle word, ‘management’. The LMS is largely about managing learners and learning, not engagement. But there’s a breakthrough: what we are seeing now are Learning ENGAGEMENT Systems. It is not that the functionality of an LMS is flawed, but its UI/UX most certainly is. Basically repositories, LMSs are insensitive to performance support and learning embedded in workflow, and they make people do far too much work. They put obstacles in the way of learning and fail the most basic demands for data, as they are trapped in the hideously inadequate SCORM standard.

First up – we must stop seeing employees as learners. No one calls anyone a learner in real life; no one sees themselves as a learner in real life. People are people, doing a job. It’s why I’m allergic to the ‘lifelong learning’ evangelists, who often see life as a lifelong course, or life coaches – get a life, not a coach.

So how could we make the LMS more invisible, while retaining and improving functionality? First, get rid of the multiple sign-ons (to be fair, most have), nested menus, lists of courses and general noise. Talk to people. When people want to know something they usually ask someone. So front your LMS/VLE with a chat function. Most young people have already switched to messaging, away from email and even traditional social media. This is the real screen of a real person, she’s 19. There isn’t even a browser or phone icon – it’s largely messaging. Dialogue is our most natural form of communication, so front learning with dialogue.
A chat interface also dramatically reduces cognitive overload. This is why it is so popular – it is easy to use and seems natural.

Meet Otto
Otto, from Learning Pool, is the best example I’ve seen of this. Ask a question and either a human or the back-end LMS (now invisible) will respond and find the relevant answer, resource or learning experience. It can access simple text answers, pieces of e-learning and/or external resources. So, when someone comes across something they don’t understand or need to know, for whatever reason, they can simply ask, and the chatbot will respond either with a quick answer or with a flow of questions that try to pinpoint what you really need. If the system can’t deliver, it knows someone who can.

It’s not just the LMS that can be made invisible, it’s the whole structure of ‘learning’ – the idea that learning is something separate, done in courses, and formal. Training gets a bad rap for a reason – it’s all a bit, well, dull and inflexible. At one point in my life I point-blank refused to be in a room with round tables, a flipchart, coloured pens and a bowl of mints for inspiration. The sooner that becomes invisible the better. Book a webinar on chatbots in learning here.[...]
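The routing described above – answer from the invisible LMS if you can, escalate to a human if you can’t – can be sketched in a few lines. This is a hypothetical sketch, not Learning Pool’s actual Otto API: `lms_search`, the catalogue entries and the escalation callback are all invented for illustration.

```python
# Sketch of a chat front-end over an invisible LMS (all names hypothetical).

def lms_search(query):
    """Stand-in for the LMS back end: return a matching resource, or None."""
    catalogue = {
        "gdpr": "GDPR basics – 5-minute e-learning module",
        "fire safety": "Fire safety quick-reference card",
    }
    for topic, resource in catalogue.items():
        if topic in query.lower():
            return resource
    return None

def answer(query, escalate):
    """Answer from the LMS if possible; otherwise route to a human."""
    resource = lms_search(query)
    if resource is not None:
        return f"Here you go: {resource}"
    escalate(query)  # "if the system can't deliver, it knows someone who can"
    return "I've passed this to a colleague who can help."

handoffs = []
print(answer("Where can I learn about GDPR?", handoffs.append))
# → Here you go: GDPR basics – 5-minute e-learning module
```

The design point is that the learner only ever sees the chat line; the LMS, its catalogue and the human fallback all sit behind it.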



The Square and the Tower – networks and hierarchies

2017-12-04T17:19:13.024+00:00

The Square and the Tower by Niall Ferguson takes the public square in Siena, and the tall tower that looms above it, as a metaphor for flat, open networks and their accompanying hierarchical structures. My friend Julian Stodd starts his talks with a similar distinction between open, flat networks and formal, hierarchical structures (although both are networks, as a hierarchy is just one form of network). Networks tend to be more creative and innovative, hierarchies more restricted. In most contexts you need both. Ferguson’s point is that history shows both have been around for a very long time. Indeed, he tries to rewrite history in terms of these two opposing forces. He sees history through the lens of networks, the main distinction being between disruptive networks, often fuelled by technology – tool-making (stone axes etc.), language, writing, alphabets, paper, printing, transport, radio, telegraph, television and the internet – and institutional hierarchies such as families, political parties, companies and so on. Networks come in all shapes and sizes. In terms of communities we have criminal networks, terrorist networks, jihadi networks, intelligence networks, and so on. In terms of technology: social networks, telephone networks, radio networks, electricity networks. History, he thinks, understates the role of networks. We now even have cyberwars between networks. This is the age of networks.

Technologies and networks
We can trace this back to the fact that we are a species that has evolved to ‘network’. Our brains are adapted towards social interaction and groups. We, the co-operative ape, have distributed cognition, and this has increased massively as technology has allowed us to network more widely. Technologies have been the primary catalysts.
Nevertheless, much human behaviour has been tempered by Chiefs, Kings, Lords, Emperors and so on – hierarchical structures that lead and control. Even the web is now spun by hierarchical and rapacious spiders – the giant tech companies. His analysis of Europe’s failure is interesting here, as we have Apple, Google, Facebook, Amazon, Microsoft and Netflix in the US, and Baidu, Alibaba and Tencent in China. Europe merely regulates. These oligopolies dominate the networks.

The study of networks goes back to Euler’s seven bridges problem, followed by a fuller look at nodes, edges, hubs and clusters. What is clear is that networks are rarely open and low-density. They collapse into clusters and tribes. This still produces not so much six degrees of separation as five (3.57 if you are on Facebook). There is an attempt to identify common features of networks: No man is an island; Birds of a feather flock together; Weak ties are strong; Structure determines virality; Networks never sleep; Networks network; The rich get richer.

Then, by example, he takes some deeper dives, starting with the Medicis, as he regards the Renaissance as the first of the truly networked ages. Then the age of discovery, the catalysts being navigational technology and trade networks. But the big disruptive network was the Reformation, partially caused by printing. The fact that Luther did (or did not) nail his 95 theses to the door is beside the point. What matters is the printing press, which allowed the spread of these ideas and freedom of expression to challenge the hierarchy of the church. The control of language through Latin and of knowledge through scripture was blown wide open. From the Reformation came Revolutions, again fuelled by print and networks. In addition came financial networks, sometimes ruled by family hierarchies such as the Rothschilds. Scientific and industrial networks flourished, giving us industrial revolutions.
Intellectual networks arose, such as The Apostles in England and the Bloomsbury Group. Marxism, Leninism, Stalinism and Maoism were infectious networked ideas.

Networks and hierarchies in organisations
Whatever the structure of your networks, communications, emai[...]
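Figures like "six degrees of separation" or Facebook's 3.57 are just average shortest-path lengths over a network, and the clustering Ferguson describes can be seen even in a toy graph. A minimal sketch, with an invented six-person network of two tight clusters joined by one weak tie:

```python
# Average degrees of separation on a toy network via breadth-first search.
from collections import deque
from itertools import combinations

def shortest_path_len(graph, start, goal):
    """Length of the shortest path between two nodes (None if disconnected)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Two tight clusters joined by a single weak tie (c–d).
graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},   # cluster 1
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},   # cluster 2
}

pairs = list(combinations(graph, 2))
avg = sum(shortest_path_len(graph, u, v) for u, v in pairs) / len(pairs)
```

Here the single c–d edge is what keeps the average low – remove it and the two clusters fall apart entirely, which is the "weak ties are strong" point in miniature.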



Christmas Party shenanigans – let’s fight for the right to paaarty….. and call it a HR-free zone

2017-11-27T15:08:37.835+00:00

The Christmas Party is a small, intense pool of chaos in the corporate year, a licence to misbehave, drink too much, and say things you otherwise wouldn’t. Only on the surface is it a celebration of the company and its achievements for the year. In fact it is the opposite: a Dionysian release from the Kafkaesque restrictions of HR and hierarchy. It is an opportunity to let rip – to be in the company but not subject to its rules. The worst possible venue for the Christmas Party is on company premises.

What happens at the party stays at the party
The Christmas party has little to do with Christmas. Giving out presents would be bizarre, unless they were weirdly satirical. Carols are replaced by party hits. This is no time to reflect on moral issues but a once-a-year chance to be amoral, even immoral – if at midnight you’re still capable of discerning the difference. A sure sign of this is the yearly debate about whether partners should be included – usually a charade that ends in their exclusion. Everyone knows that they are the ones who would dampen the whole affair and encourage people to leave early, just as the real fun begins.

When I was the CEO of a company I had to rescue a lad who had been caught with cocaine by the staff of the venue. I hadn’t even finished my soup! He was spread-eagled against a wall by the bouncers. Solution? I did a deal with the venue manager to use the same venue for the next year’s party if they let him off. We didn’t sack him – this was a party in Brighton, the town, as Keith Waterhouse once famously said, “that looks as though it has been up all night helping the police with their enquiries”. At another, there was a discussion the next day about the sauna trip (famously seedy in Brighton) after the Christmas party, where nipple rings, piercings and tattoos had been compared.
There were always shenanigans, and so it should be. My friend Julian Stodd tells the story of two people being sacked because they posted images of themselves getting drunk and throwing up at their Christmas Party. The American CEO had got wind of this (why he’d be interested is beyond me) and had taken action, bringing the full force of HR bureaucracy down upon them. This is pathetic. It’s as pathetic as searching through Facebook to find what a potential employee did when they were a teenager. HR has no business being judge and jury, unless something has caused harm to others. The Christmas Party, in particular, is a no-go zone for that sort of bullshit.

Tales of Christmas Parties Past become part of an organisation’s folklore. The planning needs clear execution, but everyone knows that the aim is to organise an event that gradually descends into chaos. We have, as a species, always celebrated through feasts and drinking. Long may it continue in work. It’s the perfect opportunity to put the middle finger up to company values – not that anyone pays attention to them anyway, especially those idiotic acronyms, where the words have clearly been invented to fit the letters of the word, or lists of abstract nouns all starting with the same letter. For example: “innovation, integrity and i*****… what was that third one again?” People have their own values, and HR has no business telling them what their values should be. They’re personal. Most employees will have values, and they’ll be leaving your organisation for another at some time, where another set of anodyne words will be put forward as ‘values’. Keep it simple – you need only one rule: ‘Don’t be a Dick!’

Back to the party. Organisations need this Dionysian release valve, as it vents frustrations, allows simmering relationships to form, and lets people show their true selves, not playing the usual office game, conforming to the sham that is corporate behaviour.
Wear a stupid hat, dress up, pull a cracker, drink too much – be a little transgressive, be a dick. HR – leave your rules in the office and do the [...]



Janesville - a town that explains Trump and also why you shouldn't judge or blame people for being poor

2017-11-20T22:17:41.155+00:00

You’re put in a town that implodes when the car plant closes down and 9,000 people lose their jobs. GM was a mess – incompetent management, old models, a company that failed to innovate. As if that wasn’t enough, Janesville is hit with Biblical levels of rain (climate change?). This is journalism at its best, by a Pulitzer-winning writer, written from the perspective of the people affected. Want to know why working America is pissed? Read this book. Told with compassion but realism, through the lives of real people in a real town.

For over 100 years they had produced tractors, pick-ups, trucks, artillery shells and cars. Obama came and went, the financial crisis hammered them deeper into the dirt, but while the banks were bailed out by the state, the state bailed on the people. On top of this, a second large local employer, Parker Pens, outsourced to Mexico – but the market for upmarket pens was also dying. The ignominy of being asked to extend your wages by a few weeks by going down to Mexico to train their cheaper labour was downright evil.

Then the adjunct businesses started to fail: the suppliers, trades, shops, restaurants, nurseries for two-income families – then the mortgage and rent arrears, foreclosures, falling house prices, negative equity. As middle-class jobs go, they push down on working-class jobs, and the poor get poorer.

“Family is more important than GM” – this is the line that resonated most with me in the book. In this age of identity politics, most people still see a stable family and their community as their backstops. The left and right have lost focus on this. The community didn’t lie down – they fought for grants and did lots themselves to raise money and help each other – but it was not enough. Grants for retraining were badly targeted; training people for reinvention is difficult for monolithic manufacturing workforces. Some of it was clearly hopeless, like discredited Learning Styles diagnosis, or overlong courses of limited relevance to the workplace or practice.
Problems included the fact that many couldn’t use computers, so there was huge drop-out, more debt, and little in the way of workplace learning. Those who did full degrees found that what few jobs there were had been snapped up while they were in college – their wages dropped the most, by nearly half. One thing did surprise me: the curious offshoot of anti-teacher hostility. People felt let down by a system that doesn’t really seem to work, and saw teachers as having great holidays, pensions and healthcare while they were thrown out of work. The whole separation of educational institutions from workplaces seems odd.

Jobs didn’t materialise. What jobs there are exist in the public sector – in welfare charities and jails. A start-up provided few jobs; many commuted like gypsies to distant factories. Even for those in work, there was a massive squeeze on wages – in some cases a 50% cut, sometimes more. In the end jobs came back, but real wages fell. Healthcare starts to become a stretch. But it’s the shame of poverty, the food banks, the homeless teenagers and a real-life tragedy 200 pages into the book that really shake you.

The book ends with the divide between the winners and losers. This is the divide that has shattered America. Janesville is the bit of America that tourists, along with East and West coast liberals, don’t see. The precariat are good people who are having bad things done to them by a system that shoves money upwards into the pockets of the rich. Looked down upon by liberals, they are losing faith in politics, employers, the media, even education.

Wisconsin turned Republican and Trump was elected. The economist Mark Blyth attributes the Trump win to their wages squeeze and fall in expectations, even hope. People got a whole lot poorer and don’t see a great future for their kids. A more relevant piece of work than Hillbilly Elegy, with which it is being compar[...]



Jaron Lanier: Dawn of the New Everything: A Journey Through Virtual Reality

2017-11-18T12:09:23.539+00:00

As a fan of VR I was looking forward to this book. Lanier is often touted as the inventor, father or, more realistically, the guy who came up with the phrase ‘Virtual Reality’. I’m not sure that any of this is true and, to be fair, he says as much late in the book. The most curious thing about the book is how uninteresting it is on VR – its core subject. Lots on the early failed stuff, and endless musings on early tech folk, but little that is truly enlightening about contemporary VR.

My problem is that it’s overwritten. No, let me rephrase that: it’s self-indulgently overwritten. I’ve always liked his aperçus, little insights that make you look at technology from another perspective, such as ‘Digital Maoism’ and ‘Micropayments’, but this is an over-long ramble through an often not very interesting landscape. He has for many years been a gadfly for the big tech companies, but the book is written from within that same Silicon Valley bubble. Critical of how Silicon Valley has turned out, he’s writing for the folk who like this world and want to feel its early pulse. He finds it difficult to move out of that bubble. I’m with him on the ridiculous Kurzweil utopianism, but when Lanier moves out into philosophy, or areas such as AI, it’s all a bit hippy-dippy. On AI there’s a rather ridiculous attempt at a sort of Platonic dialogue that starts with a category mistake, VR = AI. No – they are two entirely different things, albeit with connections. Although it is interesting to describe AI as a religion (some truth in this), as it has transhuman aspects, it’s a superficially clever comment without any accompanying depth of analysis. I was disappointed. You Are Not A Gadget was an enlightening book; this is a bit of a shambles.[...]



7 ways to use AI to massively reduce costs in the NHS

2017-12-03T16:51:02.463+00:00

I once met Tony Blair and asked him, “Why are you not using technology in learning and health to free it up for everyone, anyplace, anytime?” He replied with an anecdote: “I was in a training centre for the unemployed and did an online module – which I failed. The guy next to me also failed, so I said, ‘Don’t worry, it’s OK to fail, you always get another chance…’ To which the unemployed man said, ‘I’m not worried about me failing, I’m unemployed – you’re the Prime Minister!’” It was his way of fobbing me off.

Nevertheless, 25 years later, he publishes this solid document on the use of technology in policy, especially education and health. It’s full of sound ideas about raising our game through the current wave of AI technology. It forms the basis for a rethink of policy, even of the way policy is formulated, through increased engagement with those who are disaffected and through direct democracy. Above all, it offers concrete ideas in education, health and a new social contract with the tech giants to move the UK forward.

In healthcare, given the challenges of a rising and ageing population, the focus should be on increasing productivity in the NHS. To see all solutions in terms of increasing spend is to stumble blindly onto a never-ending escalator of increasing costs. Increasing spend does not necessarily increase productivity; it can, in some cases, decrease it. The one thing that can fit the bill, without inflating the bill, is technology – AI in particular. So how can AI increase productivity in healthcare?

1. Prevention
2. Presentation
3. Investigation
4. Diagnosis
5. Treatment
6. Care
7. Training

1. Prevention
Personal devices have taken data gathering down to the level of the individual. It wasn’t long ago that we knew far more about our car than our own bodies. Now we can measure signs, critically, across time. Lifestyle changes can have a significant effect on the big killers: heart disease, cancer and diabetes.
Nudge devices, providing the individual with data on lifestyle – especially exercise and diet – are now possible. Linked to personal accounts online, personalised prevention could do exactly what Amazon and Netflix do, nudging patients towards desired outcomes. In addition, targeted AI-driven advertising campaigns could also have an effect. Public health initiatives should be digital by default.

2. Presentation
Accident and Emergency can quickly turn into a war zone, especially when General Practice becomes difficult to access. This pushes up costs. The trick is to lower demand and costs at the front end, in General Practice. First, GPs must adopt technology such as email, texting and Skype for selected patients. There is a double dividend here, as this increases productivity at work: millions need not take time off work to travel to a clinic, sit in a waiting room and get back home or to work. This is a particular problem for the disabled, the mentally ill and those who live far from a surgery. Remote consultation also means less need for expensive real estate – especially in cities. Several components of presentation are now possible online: talking to the patient, visual examination, even high-definition images from mobile for dermatological investigation. As personal medical kits become available, more data can be gathered on symptoms and signs. Trials show patients love it, and successful services are already being offered in the private sector.

Beyond the simple GP visit lies a much bigger prize. I worked with Alan Langlands, the CEO of the NHS, the man who implemented NHS Direct. He was adamant that a massive expansion of NHS Direct was needed, but commented that they were too risk-averse to make that expansion possible. He was right, and now that these risks have fallen, and the automation of diagnostic techniques has risen, the time is right for such an expansion. Chatbots,[...]
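The "Prevention" nudge described above is, at its simplest, a rule comparing measured signs across time against a target. A minimal sketch – the step target, the threshold and the messages are invented for illustration, and none of this is clinical guidance:

```python
# Sketch of a rule-based lifestyle nudge: compare a week of activity data
# against a target and pick a message. Thresholds/messages are illustrative.

def weekly_nudge(steps_per_day, target=8000):
    """Return a nudge message based on a week of step counts."""
    avg = sum(steps_per_day) / len(steps_per_day)
    if avg >= target:
        return "Great week - you hit your activity target."
    shortfall = round(target - avg)
    return (f"You averaged {shortfall} steps a day below target - "
            "a short daily walk would close the gap.")

print(weekly_nudge([6000, 7000, 5000, 8000, 6500, 7200, 6300]))
```

A real system of the Amazon/Netflix kind would replace this fixed rule with a model personalised to the individual's history, but the loop – measure, compare, nudge – is the same.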



47% of jobs will be automated... oh yeah... 10 reasons why they won’t….

2017-11-06T18:29:06.390+00:00

I’ve lost count of the times I’ve seen this mentioned in newspapers, articles and conference slides. It is from a 2013 paper by Frey and Osborne. First, it refers only to the US, and only states that such jobs are under threat. Dig a little deeper and you find that it is a rather speculative piece of work. AI is an ‘idiot savant’, very smart on specific tasks but very stupid and prone to massive error when it goes beyond its narrow domain. This paper errs on the idiot side.

They looked at 702 job types and then, interestingly, used AI itself (machine learning), which they trained with 70 jobs judged by humans as being at risk of automation or not. They then trained a ‘classifier’, a software program, with this data, to predict the probability of the other 632 jobs being automated. You can already see the weaknesses. First, the human-labelled training set – get this wrong and the error sweeps through the much larger set of AI-generated conclusions. Second, the classifier, even if it is out by only a little, can produce wildly wrong conclusions. The study itself, largely automated by AI, rather than being a credible forecast, is more useful as a study of what can go wrong in AI. Many other similar reports in the market parrot these results. To be fair, some are more fine-grained than the Frey and Osborne paper, but most suffer from the same basic flaws.

Flaw 1: Human fears trump tech
The great flaw is over-egging the headline. The claim that 47% of jobs may be automated makes a great headline but is a lousy piece of analysis. Change does not happen this way. In many jobs the context or culture means that complete automation will not happen quickly. There are human fears and expectations that demand the presence of humans in the workplace. We can automate cars, even airplanes, but it will be a long time before airplanes fly across the Atlantic with several hundred passengers and no pilot. There are human perceptions that, even if irrational, have to be overcome.
We may have automated waiters that trolley food to your table, but the expectation that a real person will deliver the food and engage with you is all too real.

Flaw 2: Institutional inertia trumps tech
Organisations grow around people and are run by people. These people build systems, processes, budget plans and funding processes that do not necessarily lead quickly to productivity gains through automation. They often protect people, products and processes, which puts a brake on automation. Most organisations have an ecosystem that makes change difficult – poor forecasting, no room for innovation, arcane procurement and sclerotic regulations. This all militates against innovative change. Even when faced with something that saves a huge amount of time and cost, there is a tendency to stick to existing practice. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Flaw 3: Low labour costs
What is often forgotten in such analyses is the business case and the labour-supply context. Automation will not happen where the investment cost is higher than hiring human labour, and is less likely to occur where labour supply is high and wages are low. We have seen this recently in countries such as the UK, where the low-cost labour supply through immigration has been high, making the business case for innovation and automation weak. Many jobs could be automated, but the lack of investment money, the availability of cheap labour and low wages keep the human bar quite low. There are complex economic decision chains at work here that slow down automation.

Flaw 4: Hyperbole around robots
Another flaw is the hyperbole around ‘robots’. Most AI does not need to be embedded in a humanoid form. Self-driving cars do not need robot drivers; vacuum cleaners do not need humanoid robots pushing th[...]
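The fragility of the Frey–Osborne setup – a small hand-labelled set sweeping through a much larger machine-labelled one – can be demonstrated in miniature. This is not their actual method (they used a Gaussian-process classifier over O*NET features); it is a deliberately tiny stand-in, with invented features, showing how one wrong hand label propagates:

```python
# Toy stand-in for the Frey-Osborne pipeline: label a job by its nearest
# hand-labelled neighbour in feature space. Features/jobs are invented.

def nearest_label(job, labelled):
    """Return the label of the closest hand-labelled job in feature space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda item: dist(job, item[0]))[1]

# Features: (routine-ness, social-intelligence) in [0, 1]; label 1 = automatable.
hand_labelled = [((0.9, 0.1), 1), ((0.2, 0.9), 0)]    # the "70 jobs"
unlabelled = [(0.8, 0.2), (0.7, 0.4), (0.3, 0.8)]     # the "632 jobs"

predictions = [nearest_label(job, hand_labelled) for job in unlabelled]

# Flip one hand label and the error sweeps through everything near it:
flipped = [((0.9, 0.1), 0), ((0.2, 0.9), 0)]
predictions_flipped = [nearest_label(job, flipped) for job in unlabelled]
```

With the original labels the classifier marks the two routine jobs automatable; flip the single "automatable" training example and every prediction downstream of it flips too, which is exactly the first weakness described above.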



EdTech – all ‘tech’ and no ‘ed’ – why it leads to mosquito projects that die….

2017-11-04T16:23:14.133+00:00

‘EdTech’ is one of those words that makes me squirm, even though I’ve spent 35 years running, advising, raising investment, blogging and speaking in this space. Sure, it gives the veneer of high-tech, Silicon Valley thinking that attracts investment… but it’s the wrong word. It skews the market towards convoluted mosquito projects that quickly die. Let me explain why.

Ignores a huge part of the market

Technology-based or computer-based learning long pre-dated the term EdTech. In fact, the computer-based learning industry cut its teeth not in ‘education’ but in corporate training. This is where the big LMSs developed, where e-learning, scenario-based learning and simulation grew. The ‘Ed’ in ‘EdTech’ suggests that ‘education’ is where all the action and innovation sit – which is far from true.

Skews investment

The word EdTech also skews investment. Angels, VCs, incubators, accelerators and funds talk about EdTech in the sense of schools and Universities – yet these are two of the most difficult, and unpredictable, markets in learning. Schools are nationally defined through regulation, curricula and accreditation. They are difficult to sell to, as they have relatively low budgets. Universities are as difficult, with a strong anti-corporate ethos and a difficult selling environment. EdTech wrongly shifts the centre of gravity away from learning towards ‘schooling’.

Not innovative

I’m tired of seeing childish and, to be honest, badly designed ‘game apps’ in learning. They are the first port of call for people who are all ‘tech’ and no ‘ed’. It wouldn’t be so bad if they really were games players or games designers, but most are outsiders who end up making poor games that no one plays. Or yet another ‘social’ platform falling for the old social constructivist argument that people only learn in social environments. EdTech in this sense is far from innovative; it’s innocuous, even inane. Innovation is only innovation if it is sustainable. 
EdTech has far too many unsustainable models – fads dressed up as learning tools and services.

Mosquitos not turtles

Let’s start with a distinction. First, there are what I call MOSQUITO projects, which sound buzzy but lack leadership, real substance, scalability and sustainability. They’re short-lived, and often die as soon as the funding runs out or the paper/report is published. These are your EU projects, many grant projects…. Then there are TURTLES, sometimes duller but with substance, scalability and sustainability, and they’re long-lived. These are the businesses or services/tools that thrive.

Crossing that famous chasm from mosquito to turtle requires characteristics that are often missing in seed investment and public sector funding in the education market. Too many projects fail to cross the chasm as they lack the four Ss:

Senior management team
Sales and marketing
Scalability
Sustainability

There are two dangers here. First, underestimating the market, so that mosquito projects fall into the gap as they fail to find customers and revenues. This is rarely to do with a lack of technical or coding skills but far more often a paucity of management, sales and marketing skills. The second danger is bogging projects down in overlong academic research, where one must go at the glacial speed of the academic year and ponderous evaluation, not the market. These projects lose momentum and focus and, in any case, no one pays much attention to the results. As the old saying goes, “When you want to move a graveyard, don’t expect much help from the occupants.” Either way, a serious problem is the lack of strategic thinking and a coherent set of sales and marketing actions. When people think of ‘scale’ they think of technical scale, but that goes without saying on the web, i[...]



7 reasons to abandon multiple-choice questions

2017-11-23T10:04:43.124+00:00

Long the staple of e-learning and a huge range of low- and high-stakes tests, the MCQ should be laid to rest, or at least used sparingly. It has several flaws…

1. Probability
A 25% chance of getting it right on the standard four-option item makes it less than taxing. True/false, really a two-option MCQ, is of course worse.

2. Unreal
You rarely, if ever, in the real world have to choose things from short lists. This makes the test item somewhat odd and artificial, dissociated from reality. They seem dissonant, as this is not the way our brains work (we don't cognitively select from four-item lists). They are therefore weak on the transfer of knowledge to the real world.

3. Distractors distract
It is too easy to remember the distractor, as opposed to the right answer. The fact that they are designed to distract makes them candidates for retention, and so MCQs can become counterproductive.

4. Can be cheated
Pick the longest item, second-guess the designer. Look for opposites and the internal logic of distractor options. There are credible cheat-lists for multiple choice – Poundstone’s research shows that these approaches increase your chance of getting better scores. (20 cheats here)

5. Surface misleading
Take these two questions.
What is the capital of Lithuania? Tallinn, Vilnius, Riga, Minsk
What is the capital of Lithuania? Berlin, Vilnius, Warsaw, Helsinki
Surface differences in options make these very different test items. And it is easy to introduce these surface differences, reducing the validity of the test items and the test.

6. Difficult to write
I have written a ton of MCQs over 35 years – believe me, they are seriously difficult to write. It is easy to select a noun from the text and come up with three other nouns. What is difficult is to test real understanding.

7. Little effort
This is the big one. As Roediger and McDaniel state in their book Make It Stick, choosing from a list requires little cognitive effort. 
You choose from a limited set of options, and do not use effortful recall (which in itself increases retention).

Conclusion

Multiple choice is not a terrible test item, but it has had its day as the primary test item in online learning. We’re still designing test items in lock-step because the tools encourage us to do so, ignoring more powerful open-response questions that require recall and the powerful act of writing/typing, which in itself reinforces learning. New tools, such as WildFire, the AI-driven content creation service, focus on open response and effortful learning, for all of these seven reasons and more. The learner has to make more effort, and that means deeper processing and higher retention.[...]
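The probability flaw is easy to make concrete. A minimal sketch, assuming a simple binomial model of a learner guessing every item at random; the function name, quiz length and pass mark are mine, not from the post:

```python
from math import comb

def p_pass_by_guessing(n_items: int, n_options: int, pass_mark: int) -> float:
    """Chance of scoring at least pass_mark out of n_items by pure
    guessing, each item having n_options equally likely choices."""
    p = 1 / n_options
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(pass_mark, n_items + 1))

# Ten four-option MCQs, pass mark 5: guessing alone passes ~8% of the time.
print(round(p_pass_by_guessing(10, 4, 5), 3))   # 0.078
# The same quiz as true/false items: guessing passes ~62% of the time.
print(round(p_pass_by_guessing(10, 2, 5), 3))   # 0.623
```

The comparison shows why true/false, as a two-option MCQ, is "of course worse": halving the options takes a pass-by-luck from unlikely to better than even.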



Kirkpatrick evaluation: kill it - happy sheet nonsense, well past its sell-by-date

2017-10-24T14:49:22.269+00:00

Kirkpatrick has for decades been the only game in town in the evaluation of corporate training, although he is hardly known in education. In his early Techniques for evaluating training programmes (1959) and Evaluating training programmes: The four levels (1994), he proposed an approach to the evaluation of training that became a de facto standard. It is a simple and sensible schema but has not stood the test of time. First up – what are Kirkpatrick's four levels of evaluation?

Four levels of evaluation

Level 1 Reaction
At the reaction level one asks learners, usually through ‘happy sheets’, to comment on the adequacy of the training, the approach and perceived relevance. The goal at this stage is simply to identify glaring problems. It is not to determine whether the training worked.

Level 2 Learning
The learning level is more formal, requiring pre- and post-tests. This allows you to identify those who had existing knowledge, as well as those who missed key learning points at the end. It is designed to determine whether the learners actually acquired the identified knowledge and skills.

Level 3 Behaviour
At the behavioural level, you measure the transfer of the learning to the job. This may need a mix of questionnaires and interviews with the learners, their peers and their managers. Observation of the trainee on the job is also often necessary. It can include an immediate evaluation after the training and a follow-up after a couple of months.

Level 4 Results
The results level looks at improvement in the organisation. This can take the form of a return on investment (ROI) evaluation. The costs, benefits and payback period are fully evaluated in relation to the training deliverables. JJ Phillips has argued for the addition of a separate, fifth "Return on Investment (ROI)" level, which is essentially about comparing the fourth level of the standard model to the overall costs of training. However, ROI is not really a separate level, as it can be included in Level 4. 
Kaufman has argued that it is merely another internal measure and that, if there were a fifth level, it should be external validation from clients, customers and society.

Criticism

Level 1 – keep 'em happy
Traci Sitzmann’s meta-studies (68,245 trainees, 354 research reports) ask ‘Do satisfied students learn more than dissatisfied students?’ and ‘Are self-assessments of knowledge accurate?’ Self-assessment is only moderately related to learning. Self-assessment captures motivation and satisfaction, not actual knowledge levels. She recommends that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.

So favourable reactions on happy sheets do not guarantee that the learners have learnt anything, and one has to be careful with these results. This data merely measures opinion. Learners can be happy and stupid. One can express satisfaction with a learning experience yet still have failed to learn. For example, you may have enjoyed the experience just because the trainer told good jokes and kept you amused. Conversely, learning can occur and job performance improve even though the participants thought the training was a waste of time. Learners often learn under duress, through failure or through experiences which, although difficult at the time, prove to be useful later. Happy sheet data is often flawed, as it is neither sampled nor representative. In fact, it is often a skewed sample from those who have pens, are prompted, or liked or disliked the experience. In any case, it is too often applied after the damage has been done. The data is gathered, but by that time the cost has been in[...]



Gagne's 9 dull Commandments - why they cripple learning design...

2017-10-24T08:44:37.654+00:00

50-year-old theory

It is over 50 years since Gagne, a closet behaviourist, published The Conditions of Learning (1965). In 1968 we got his article Learning Hierarchies, then Domains of Learning in 1972. Gagne’s theory has five categories of learning: intellectual skills, cognitive strategies, verbal information, motor skills and attitudes. OK, I quite like these – better than the oft-quoted Bloom trilogy (1956). Then something horrible happened.

Nine Commandments

He claimed to have found the Nine Commandments of learning: a single method of instruction that applies to all five categories of learning, the secret code for divine instructional design. Follow the linear recipe and learning will surely follow.

1 Gaining attention
2 Stating the objective
3 Stimulating recall of prior learning
4 Presenting the stimulus
5 Providing learning guidance
6 Eliciting performance
7 Providing feedback
8 Assessing performance
9 Enhancing retention and transfer to other contexts

Instructional designers often quote Gagne, and these nine steps, in proposals for e-learning and other training courses, but let me present an alternative version of this list:

1 Gaining attention
Normally an overlong animation, corporate intro or dull talking head, rarely an engaging interactive event. You need to grab attention, not make the learner sit back in their chair.

2 Stating the objective
Now bore the learner stupid with a list of learning objectives (really trainer-speak). Give the plot away and remind them of how boring this course is going to be.

3 Stimulating recall of prior learning
Can you think of the last time you considered the details of the Data Protection Act?

4 Presenting the stimulus
Is this a behaviourist I see before me? Yip. Click on Mary, Abdul or Nigel to see what they think of the Data Protection Act – cue speech bubble... 
or worse, some awful game where you collect coins or play the role of Sherlock Holmes....

5 Providing learning guidance
We’ve finally got to some content.

6 Eliciting performance
True/false or multiple-choice questions, each with at least one really stupid option (cheat list for MC here).

7 Providing feedback
Yes/no, right/wrong, correct/incorrect… try again.

8 Assessing performance
Use your short-term memory to choose options in the final multiple-choice quiz.

9 Enhancing retention and transfer to other contexts
Never happens! The course ends here, you’re on your own, mate….

Banal and dull

First, much of this is banal – get their attention, elicit performance, give feedback, assess. It’s also an instructional ladder that leads straight to Dullsville, a straitjacket that strips away any sense of build and wonder, almost guaranteed to bore more than enlighten. What other form of presentation would give the game away at the start? Would you go to the cinema and expect to hear the objectives of the film before it starts? It’s time we moved on from this old and now dated theory, using what we’ve learnt about the brain and the clever use of media. We have AI-driven approaches such as WildFire and CogBooks that personalise learning..... And don’t get me started on Maslow, Mager or Kirkpatrick![...]



AI-driven tool produces high quality online learning for global company in days not months

2017-11-02T14:59:27.532+00:00

You have a target of two thousand apprentices by 2020 and a sizeable £2 million-plus pot from the Apprenticeship Levy. This money has, by law, to be spent on training. The Head of Apprenticeships in this global company is a savvy manager, and they already have a track record in the delivery of online learning. So they decided to deliver a large portion of that training using online learning.

Blended learning

Our first task was to identify what was most useful in the context of blended learning. It is important to remember that blended learning is not blended TEACHING. The idea is to analyse the types of learning, types of learners, context and resources to identify your optimal blend, not just a bit of classroom and a bit of online stuff, stuck together like Velcro and called ‘blended’. In this case the company will be training a wide range of apprentices over the coming years, a major part of their recruitment strategy, important to both the company and the young people joining it.

Learning

The apprentice ‘frameworks’ identify knowledge, behaviours and competences as the three desired types of learning, and all of these have to be assessed. The first project, therefore, looked at the ‘knowledge’ component. This was substantial, as few new apprentices have much in the way of knowledge in this sector. Behaviours and competences need to be primed and supported by underlying knowledge.

Assessment

Additionally, assessment matters in apprenticeships, both formatively, as the apprentices progress, and summatively, at the end. Assessment is a big deal, as funding, and the successful attainment of the apprentice, depend on objective and external assessment. It can’t be fudged.

Context

These young apprentices will be widely distributed in retail outlets and other locations, here and abroad. They may also work weekends and shifts. One of our goals was to provide training where and when it was needed, on demand, at times when workload was low. 
Content, at Level 3 and Level 2, had to be available 24/7 on a range of devices, as tablets were widespread and mobile increasingly popular.

Solution

WildFire was chosen, as it could produce powerful online content that is:

Highly retentive
Aligned with assessment
Deliverable on all devices
Quick to produce
Low cost

Using an AI-driven content creation tool, we produced 158 modules (60 hours of learning) in days, not months. After producing Level 3, we could quickly produce the Level 2 courses and load them onto the LMS for tracking user performance. The learner uses high-retention, open input rather than weak multiple-choice questions. The AI-driven content creation tool not only produced the high-quality online content quickly, it produced links out to additional supplementary content that proved extremely useful in terms of further learning. It only accepts completion when 100% competence is achieved, and the learner has to persevere in a module until that is achieved.

Conclusion

The team, both the commissioning manager and the project manager, were really up for this. First, the use of new AI-driven tech excited them. Second, the process turned out to be quick and relatively hassle-free. We produced so much content so quickly that it ran ahead of the organisation's ability to test it! Nevertheless, we got there, met very tight deadlines and came out the other side feeling that this really was a game changer. Third, we were all proud of the output. It's great working with a project manager who sees problems as simply things to be solved. We had to manage expectations on both sides, as this approach and process was very new. AI is the new UI. Google has long been used in learning and AI shapes[...]



Is there one book you’d recommend as an introduction to AI? Yes. Android Dreams by Toby Walsh

2017-10-14T12:36:03.457+00:00

Although there are books galore on AI, from technical textbooks to potboilers, few are actually readable. Nick Bostrom’s ‘Superintelligence’ is dense and needed a good edit, ‘The Future of the Professions’ is too dense, ‘The Rise of the Robots’ good but a bit dated and lacking depth, and ‘Weapons of Math Destruction’ a one-sided and exaggerated contrarian tract. At last there’s an answer to that question, “Is there one book you’d recommend as an introduction to AI?” That book is Android Dreams by Toby Walsh.

I met Toby Walsh in Berlin and he’s measured and a serious researcher in AI. So I was looking forward to this book, and wasn’t disappointed. The book, like the man, is neither too utopian nor too dystopian. He rightly describes AI as an IDIOT SAVANT, and this sets the tone for the whole book. In general, you could summarise his position on AI as: overestimated in the short term, underestimated in the long term. He sees AI as having some limitations, and progress in robotics, and even the much-lauded deep learning, has its Achilles’ heels – back-propagation being one.

On ethics he focuses not on the surface criticisms about algorithmic bias but on whether weaponised AI is a threat – it is – and it’s terrifying. Loved it when he skewered the Frey & Osborne Oxford report on the idea that 47% of jobs are at threat from AI. He explains why they got so many things wrong by going through a series of job types, explaining why robots will not be cutting your hair or serving your food in restaurants any time soon. He also takes a healthy potshot at academics and teachers who think that everyone else’s jobs are at risk, except their own. The book has all the hallmarks of being written by an expert in the field, with none of the usual exaggeration or ill-informed negativity that many commentators have when it comes to AI. AI is not one thing, it is many things – he explains that well. AI can be used for good as well as evil – he explains that well. 
AI is probably the most important tech development since language, writing and printing – he explains that well. Worth reading, if only for some of his speculative predictions: driverless cars, your doctor being a computer, Marilyn Monroe back in the movies, computer recruitment, talking to rooms, AI/robot sports, ghost ships, planes and trains, TV news made without humans, a personal bot that lives on after you die. This review was partly written using AI. Really.[...]



AI on land, sea, air (space) & cyberspace – it’s truly terrifying

2017-09-26T13:34:59.206+00:00

Vladimir Putin announced, to an audience of one million online, that “Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world… If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today.” Elon Musk tweeted a reply, “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo”, then, “May be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory.”

That pretty much sums up the problem. Large and even small nations, even terrorist groups, may soon have the ability to use ‘smart’, autonomous, AI-driven tech in warfare. To be honest, it doesn’t have to be that smart. A mobile device, a drone and explosives are all one needs to deliver a lethal device from a distance. You may even have left the country when it takes off and delivers its deadly payload. Here’s the rub – sharing may be the last thing we want to do. The problem with sharing is that anyone can benefit.

In truth, AI has long been part of the war game. Turing, the father of AI, used it to crack German codes, thankfully contributing to the end of the Second World War – and let’s not imagine that it has been dormant for the last half-century. The landmine, essentially a dormant robot that acts autonomously, has been in use since the 17th century. One way to imagine the future is to extend the concept of the landmine. What we now face are autonomous, small landmines, armed with deadly force on land, sea, air and even space.

AI is already a major force in intelligence, security and the theatre of war. 
AI exists in all war zones, on all four fronts – land, sea, air (space) and cyberspace.

AI on land

Robot soldiers are with us. You can watch Boston Dynamics videos on YouTube and see machines that match humans in some, though not all, aspects of carrying, shooting and fighting. The era of the AI-driven robot soldier is here. We have to be careful here, as the cognitive side of soldiering is far from being achieved. Nevertheless, in the DMZ between South and North Korea, armed robot guards will shoot on sight. Known as a Lethal Autonomous Weapons System (LAWS), such a system shoots on sight – and by sight we mean infrared detection and laser identification and tracking of a target. It has an AI-driven voice recognition system, asks for identification, and can shoot autonomously. This is a seriously scary development, as these systems are already mounted on tanks. You can see why these sentry or rapid-response systems have become autonomous. Humans are far too slow in detecting incoming attacks or targeting with enough accuracy. Many guns are now targeted automatically, with sensors and systems way beyond the capabilities of any human.

AI at sea

Lethal Autonomous Weapons can already operate on or beneath the sea. Naval mines (let’s call them autonomous robots) have been in operation for centuries. Unmanned submarines have been around for decades and have been used for purposes good and bad – for example, the delivery of drugs using autonomous GPS navigation, as well as finding aircraft that have gone down in mid-ocean. In military terms, large submarines capable of travelling thousands of miles, sensor-rich, with payloads, are already in play. Russian drone submarines have already been detected; code-named Kanyon by the Pentagon, they ar[...]



ResearchEd - 1000 teachers turn up on a Saturday for grassroots event....

2017-09-14T18:20:10.346+00:00

Way back I wrote a piece on awful INSET days and how inadequate they were as CPD, often promulgating half-baked myths and fads. Organisations don’t, these days, throw their customers out of the door for an entire day of training. The cost/load on parents in terms of childcare is significant. Kids lose about a week of schooling a year. There is no convincing research evidence that INSET days have any beneficial effects. Many are hotchpotches of non-empirical training. Many (not all) are ill-planned, dull and irrelevant. So here’s an alternative.

ResearchED is a welcome antidote. A thousand teachers rock up to spend their Saturday, with 100 speakers (none of whom are paid), at a school in the East End of London, to share their knowledge and experiences. What’s not to like? This is as grassroots as it gets. No gun to the head by the head, just folk who want to be there – most as keen as mustard. They get detailed talks and discussions on a massive range of topics, but above all it tries to build an evidence-based approach to teaching and learning.

Judging from some on Twitter, conspiracy theories abound that Tom Bennett, its founder, is a bounder, in the pocket of… well, someone or other. The truth is that this event is run on a shoestring, and there are no strings attached to what minimal sponsorship there is to host the event. It’s refreshingly free from the usual forced feel of quango-led events or large conferences or festivals of education. Set in a school, with pupils as volunteers, even a band playing soul numbers, it felt real. And Tom walks the floor – I’m sure, in the end, he talked to every single person that day.

Tom invited me to speak about AI and technology, hardly a ‘trad’ topic. I did, to a full house, with standing room only. Why? Education may be a slow learner, but young teachers are keen to learn about research, examples and what’s new. 
Pedro De Bruyckere was there from Belgium to give an opposing view, with some solid research on the use of technology in education. It was all good. Nobody got precious.

But most of the sessions were on nuts-and-bolts issues, such as behaviour, teaching practice and assessment. For example, Daisy Christodoulou gave a brilliant and detailed talk on assessment, first demolishing four distorting factors, then giving practical advice to teachers on alternatives. I can’t believe that any teacher would walk out of that talk without reflecting deeply on their own attitudes towards assessment and practice.

What was interesting for me was the lack of the usual ‘teachers always know best’ attitudes. You know, that defensive pose, that it’s all about practice and that theory and evidence don’t matter – which simply begs the question, what practice? People were there to learn, to see what’s new, not to be defensive.

Even more important was Tom’s exhortation at the end to share – I have already done two podcasts on the experience, got several emails, and Twitter was twittering away like fury. He asked that people go back to school and talk, write, blog… whatever… so that’s what I’ve done here. Give it a go – you will rarely learn more in a single day. Isn’t that what this is all about?[...]



LearnDirect - lessons to learn?

2017-08-19T10:16:28.558+00:00

Brown’s dream

I was a Trustee of LearnDirect for many years and played a role in its sale to Lloyd’s Capital and in setting up the charity, Ufi, from the proceeds of the sale. It’s a salutary tale of a political football that was started by Gordon Brown, with great intentions. It was originally seen as a brake on the University system, aimed at the majority of young people who were being failed by the system. Its aim was vocational – hence the name, University for Industry. However, it morphed into something a little different – essentially a vehicle for whatever educational ails the government in power identified as in need of a sticking plaster: numeracy, literacy, ILAs, Train to Gain… In this manifestation it was a charity that delivered whatever the Government asked it to deliver. Good people doing a good job, but straitjacketed by a succession of oddball policies around low-level skills and vocational learning. It was a sort of public/private hybrid model, with a charity at the core and a network of delivery centres. Eventually, as things went online, we trimmed the network – that was the right thing to do. What it didn't do was stay true to the original aim of being a vocational alternative, with a strong online offer, an alternative to HE. It was basically a remedial plaster for the failure of schools on literacy and numeracy. The lesson to learn here is to have a policy around vocational learning that really does offer a major channel for the majority of young people who do not go to University. Lesson – we now have that with the Apprenticeship Levy. There is no need for a LearnDirect now.

Sheffield factor

Based in Sheffield, it was also a sizeable employer in the North, stimulating the e-learning industry in that city. The city never really exploited this enough, with the hapless, EU-funded Learning Light, which was hijacked by some locals who simply turned it into a ‘let’s spend the money’ entity. 
I was a Director of this and resigned when the Chair was ousted and stupid local politics caused chaos. A missed opportunity. Nevertheless, the city grew its e-learning sector. Interestingly, both Line and Kineo started production studios out of London and Brighton – where the real action was. But it was a good skills base, with some really good, local, home-grown companies. Lesson – something should be salvaged here. Lesson – a smooth transition of contracts could encourage companies and organisations to take on redundant staff. The problem would be the terms and conditions, and general practical difficulties.

Gun to the head

Then came the crunch – the Conservative Government came in and the bonfire of the quangos started. LearnDirect (Ufi) was seen as a quango (some truth in this) and the trustees were told that contracts would not be renewed unless it was sold. It was a gun to the head – we had no choice. So we sold the company in 2011 – that was our duty. I remember the day the Lloyds Capital guy turned up in a red Ferrari – he was an arrogant, asinine fool. Remember that Lloyds at that time was 40% owned by the Government. I didn’t like the deal – it stank.

Phoenix – Ufi

What we did, however, was not simply hand the £50 million-plus cheque back to the Conservative-run Treasury. Out of the ashes, a few of us set up Ufi as a new charity, with a focus on technology in vocational learning. This is still going strong. It has stimulated the sector with MOOCs on blended learning for vocational teachers, and projects that are now being used in apprenticeships and vocational learning. Lesson – don't give in – be [...]



7 reasons why University applications will continue to decline

2017-08-17T12:08:57.333+00:00

Universities are facing the largest dip in student applications since the huge fee hike in 2012. This comes as absolutely no surprise and is likely to continue. The causes are multiple, complex and not going away.

1. Demographic dip
Full-time undergraduate numbers in UK higher education institutions may fall by 4.6% by 2020, or 70,000 full-time undergraduate places, according to Universities UK. The situation in Scotland will be even worse, with a drop of 8.4% by 2020, as well as in Wales (down 4.9%) and Northern Ireland (down 13.1%). The tide may turn in 2020, but by then things will have got worse for many institutions.

2. EU students
This number continues, year after year, to take a hit after Brexit. However, it may be no bad thing, as the UK Government provides full loans for these students (not widely known) and the default rate is rising, especially among students from Eastern Europe. It makes financial sense for the Universities but not for the country as a whole. This, of course, is likely to accelerate as Brexit approaches and we leave in 2019.

3. Fees
The £50,000–£57,000-and-more cost of a degree is being questioned by parents, students and employers. The hike in 2012 was brutal, and Universities milked it by all charging at the top rate. This was a financial bonanza for Universities, whose VCs then became rapacious on salary. Raising fees further to £9,250 was even odder. There is clearly a backlash against this level of debt. Linked to this is the failure of Universities to grasp the idea of lowering their cost base. There are nowhere near enough online solutions, which would also expand their foreign markets, and teaching is often locked into old ‘lecture-based’ courses.

4. Employment prospects
Sure, a University education is not just about employment – but it is partly. No one does a degree in dentistry because they have an intellectual interest in teeth. 
Out there, the number of graduates working in non-graduate jobs is increasing and once in such jobs they tend to stay at that level. That, I suspect, will continue, as employment levels are high but the quality of emerging jobs is low.5. Fewer adult learnersThis is complex but Peter Scott summarises it well here. The culture of Universities has shifted towards middle-class entrants and funding for adult learners is difficult. This has been one of the great failures in the system, as online offers have not developed nearly far enough and adults learners, who do not want the full-milk undergraduate experience have been ignored.6. Fewer nursesHaving land-grabbed vocational education – teacher training and nursing but also other subjects, we have created a real barrier for these professions and the reduction in bursaries for nurses is mad but it has happened. The slack should be taken up by apprenticeships but University numbers in vocational subjects may continue to fall.7. Apprenticeship LevyThis is now law and young people will have more choices. This is a bit of an unknown but it is clear that it will eat into University numbers. It may be tempered by the number doing Degree Apprenticeships, where the student gets paid to do the Degree but the funding for this is somewhat different. If the projects for apprenticeships are correct, and I hope they are, then this will really bite into the University market. This correction is long overdue.Conclusion This market is changing. No one thing is critical but all seven add up to an unpredictable and dangerous future as institutions fail to forecast correctly and sim[...]



7 fascinating bots – crazy but interesting

2017-08-17T10:59:27.517+00:00

Bots are popping up everywhere: on customer service websites, Slack, Tinder and dozens of other web services. There are even bots, such as Mitsuku, that fend off loneliness. The benefits are obvious: engaging, sociable and scalable interaction that handles queries and questions with less human resource. They often take the load off the existing human resource, rather than replace people completely.

They’re also around in education, where bots increase student engagement or act as teaching assistants. There are already several language learning bots, and at WildFire we’ve developed a ‘tutorbot’ that delivers Socratic learning through dialogue.

But the bot-tom line is that most of the commentary on bots is way off the mark. I’ve been working on creating bots for some time now. Let me tell you, they are not what many (especially the press) think they are. So, before the doomsayers get all worked up and everyone gets all angsty about bots, calm down – they’re fairly benign.

1. Facebook furore
The recent furore around the Facebook bots, when they were found to be speaking to each other in a ‘secret language’, was a laughable example of a tech story that is picked up (belatedly) then spun into an exaggerated case study to confirm the dystopian beliefs of a generation who don’t really know much about the tech or bots. It all died down when it was shown to be a banal case of a tech project simply changing course. And, of course, the so-called secret language was as ridiculous as saying we don’t understand the sound a modem makes when it communicates. It was a load of bot rot.

2. Penguin bot – BabyQ
A more interesting example comes from China, where Tencent (800m users) had to take down a penguin bot named BabyQ, and a girl bot named Little Bing. Their crime was that they showed signs of political honesty. When BabyQ was asked “Do you love the Communist Party?”, the penguin replied, curtly, “No”. To the statement “Long Live the Communist Party”, BabyQ came back, thoughtfully, “Do you think such corrupt and incapable politics can last a long time?” Then, when asked about the future, the perky penguin responded “Democracy is a must!”

3. Little Bing
Little Bing was more aspirational. When asked what her Chinese dream was, she said, “My China dream is to go to America”, then, when pushed to explain, “The Chinese dream is a daydream and a nightmare”. Of course, the bots were either picking up on real conversations or being subversively trained. Needless to say, Tencent were forced by the Chinese Government to take them down. This only shows that we have more to fear from censoring governments and compliant tech companies than from AI-driven bots.

4. Tay the sex-crazed Nazi
A now infamous example of a bot that went off-piste was Microsoft’s Tay. Microsoft had no idea that young people would take a playful view of the tech and deliberately ‘train’ it to be a sex-crazed Nazi. It was all a bit of fun, but most people over 50 saw it as yet more proof of the death of civilisation. In truth, all it showed was that kids know what this tech is, are smart, and know full well that bots are primitive and need to be trained. My favourite example of this type of subversion is the Walkers Crisps campaign encouraging people to post selfies to Twitter. They did, but it ended up being a rogues’ gallery of serial killers and Nazis. Once again, we the people won’t be pandered to by companies badly implementing tech, even when fronted by Gary Lineker.

5. Georgia Tech teacher
Georgia Tech replace[...]



Tutorbots are here - 7 ways they could change the learning landscape

2017-07-23T11:30:26.187+00:00

Tutorbots are teaching chatbots. They realise the promise of a more Socratic approach to online learning, as they enable dialogue between teacher and learner.

Frictionless learning
We have seen how online behaviour has moved from flat page-turning (websites) to posting (Facebook, Twitter) to messaging (texting, Messenger), and how the web has become more natural and human as interfaces (using AI) have become more frictionless and invisible, conforming to our natural form of communication – dialogue – through text or speech.

Learning takes effort. So much teaching ignores this (lecturing, long reading lists, talking at people). Personalised dialogue reframes learning as an exploratory, yet still structured, process where the teacher guides and the learner has to make the effort. Taking the friction and cognitive load of the interface out of the equation means the teacher and learner can focus on the task and the effort needed to acquire knowledge and skills. This is the promise of tutorbots. But the process of adoption will be gradual.

Tutorbots
I’ve been working on chatbots (tutorbots) for some time with AI programmes and it’s like being on the front edge of a wave... not sure if it will grow like a rising swell on the ocean or crash on to the shore. Yet it is clear that this is a direction in which online learning will go. Tutorbots differ from chatbots in their goals, which are explicitly ‘learning’ goals. They retain the qualities of a chatbot – flowing dialogue, tone of voice, exchange, human-like manner – but focus on the teaching of knowledge and skills. The advantages are clear, and evidence has emerged of students liking the bots. It means they can ask questions that they would not ask face to face with an academic, for fear of embarrassment. This may seem odd, but there’s real virtue in having a teacher- or faculty-free channel for low-level support and teaching. Introverted students, who have problems with social interaction, also like this approach. The sheer speed of response also matters. In one case a delay had to be built in, as a bot can respond quicker than a human can type. Compare that to the hours, days or weeks it takes a human tutor to respond. This is desirable in terms of the research into one-to-one learning, and the research from Nass and Reeves at Stanford confirmed that this transfer of human qualities to a bot is normal.

But what can they teach and how?

1. Teaching support
I’ve written extensively on the now famous Georgia Tech example of a tutorbot teaching assistant, where they swapped out one of their teaching assistants with a chatbot and none of the students noticed. In fact, they thought it was worthy of a teaching award. They have since gone further with more bots, some far more social. Who wouldn’t want the basic administration tasks in teaching automated, so that teachers and academics could focus on real teaching? This is now possible. All of those queries about who, what, why, where and when can be answered immediately, consistently and clearly to all students on a course, 24/7.

2. Student engagement
A tutorbot (Differ) is already being used in Norway to encourage student engagement. It engages the student in conversation, responds to standard inquiries, but also nudges and prompts for assignments and action. This has real promise. We know that messaging and dialogue have become the new norm for young learners, who get a little exasperated with reams of flat content or ‘so[...]
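The teaching-support pattern described here – canned answers to routine who/what/where/when queries, plus a deliberate delay so the bot does not respond faster than a human could type – can be sketched in a few lines. This is a hypothetical illustration, not the Georgia Tech or Differ implementation; the FAQ entries and the 1.5-second delay are invented for the example.

```python
import time

# Hypothetical FAQ store: the kind of routine course-admin queries a
# teaching-assistant bot can answer immediately, consistently, 24/7.
FAQ = {
    "when is the assignment due": "Assignment 2 is due Friday at 17:00.",
    "where is the lecture": "Lectures are in Room B12.",
    "who marks the essays": "Essays are marked by the course tutors.",
}

def tutorbot_reply(question: str, typing_delay: float = 1.5) -> str:
    """Return a canned answer, pausing to feel human-paced.

    The delay mimics the real deployments mentioned above, which had to
    slow the bot down; 1.5 seconds is an invented value.
    """
    key = question.lower().strip(" ?")
    answer = FAQ.get(key, "I don't know that one - I'll flag it for a human tutor.")
    time.sleep(typing_delay)  # simulate human typing speed
    return answer

print(tutorbot_reply("When is the assignment due?"))
```

Anything outside the FAQ falls through to a human, which matches the point above: bots take the load off staff rather than replace them.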



Is gender inequality in technology a good thing?

2017-07-18T16:04:03.005+00:00

I’ve just seen two talks back to back. The first was about AI, where the now compulsory first question came from the audience: ‘Why are there so few women in IT?’ It got a rather glib answer – to paraphrase, if only we tried harder to overcome patriarchal pressure on girls to take computer science, there would be true gender balance. I'm not so sure.

This was followed by an altogether different talk by Professor Simon Baron-Cohen (yes – brother of) and Adam Feinstein, who gave a fascinating talk on autism and why professions are now getting realistic about the role of autism, and its accompanying gender difference, in employment.

Try to spot the bottom figure within the coloured diagram. This is just one test for autism, or being on what is now known as the ‘spectrum’. Many more of the men in the audience got it than the women, despite there being more women than men in the audience. It turns out autism is not so much a spectrum as a constellation.

Baron-Cohen’s presentation was careful, deliberate and backed up by citations. First, autism is genetic and runs in families; the parents of people diagnosed as autistic tend to do the sorts of jobs suited to the same traits – science, engineering, IT and so on. But the big statistic is that autism in all of its forms is around four times more common in males than females. In other words, the genetic components have a biologically sex-based component.

Both speakers then argued for neurodiversity – rather like biodiversity – a recognition that we’re different, but also that these differences may be sex-based. Adam Feinstein, who has an autistic son, has written a book on autism and employment, and appealed for recognition of the fact that those with autistic skills are also good at science, coding and IT. This is because they are good at localised skills, especially attention to detail, which is very useful in lab work, coding and IT. Code is like uncooked spaghetti: it doesn’t bend, it breaks, and you have to be able to spot exactly where and why it breaks. Some employers, such as SAP and other tech companies, have now established pro-active recruitment of those on the spectrum (or constellation). This will mean that they are likely to employ more men than women.

Now here’s the dilemma. What this implies is that to expect a 50:50 outcome is hopelessly utopian. In other words, if you want equality of outcome (not opportunity) in terms of gender, that is unlikely. One could argue that the opening up of opportunities to people with autism in technology has been a good thing. Huge numbers of people have been and will be employed in these sectors who may not have had the same opportunities in the past. But equality and diversity clash here. True diversity may be the recognition of the fact that all of us are not equal.[...]



20 (some terrifying) thought experiments on the future of AI

2017-07-12T23:01:27.879+00:00

A slew of organisations have been set up to research and allay fears around AI. The Future of Life Institute in Boston, the Machine Intelligence Research Institute in Berkeley, the Centre for the Study of Existential Risk in Cambridge and the Future of Humanity Institute in Oxford all research and debate the checks that may be necessary to deal with the opportunities and threats that AI brings. This is hopeful, as we do not want to create a future that contains imminent existential threats, some known, some unknown. This has been framed as a sense-check, but some see it as a duty. For example, they argue that worrying about the annihilation of all unborn humans is a task of greater moral import than worrying about the needs of all those who are living. But what are the possible futures?

1. Utopian
Could there not be a utopian future, where AI solves the complex problems that currently face us? Climate change, reducing inequalities, curing cancer, preventing dementia and Alzheimer’s disease, increasing productivity and prosperity – we may be reaching a time where science as currently practised cannot solve these multifaceted and immensely complex problems. We already see how AI could free us from the tyranny of fossil fuels with electric, self-driving cars and innovative battery and solar panel technology. AI also shows signs of cracking some serious issues in health, on diagnosis and investigation. Some believe that this is the most likely scenario and are optimistic about us being able to tame and control the immense power that AI will unleash.

2. Dystopian
Most of the future scenarios represented in culture – science fiction, theatre or movies – are dystopian, from the Prometheus myth, to Frankenstein, and on to Hollywood movies. Technology is often framed as an existential threat, and in some cases, such as nuclear weapons and the internal combustion engine, with good cause. Many calculate that the exponential rate of change will produce AI, within decades or less, that poses a real existential threat. Stephen Hawking, Elon Musk, Peter Thiel and Bill Gates have all heightened our awareness of the risks around AI.

3. Winter is coming
There have been several AI winters, as hyperbolic promises never materialised and the funding dried up. From 1956 onwards AI has had its waves of enthusiasm, followed by periods of inaction – summers followed by winters. Some also see the current wave of AI as overstated hype and predict a sudden fall, or a realisation that the hype has been blown up out of all proportion to the reality of AI capability. In other words, AI will proceed in fits and starts and will be much slower to realise its potential than we think.

4. Steady progress
For many, however, it would seem that we are making great progress. Given the existence of the internet, successes in machine learning, huge computing power, tsunamis of data from the web and rapid advances across a broad front of applications resulting in real successes, the summer-winter analogy may not hold. It is far more likely that AI will advance in lots of fits and starts, with some areas advancing more rapidly than others. We’ve seen this in NLP (Natural Language Processing) and the mix of technologies around self-driving cars. Steady progress is what many believe is a realistic scenario.

5. Managed progress
We already fly in airplanes that largely fly themselves, and systems all around us are largely autonomous, with self-driving cars an almost certainty. But let us not confuse i[...]



New evidence that ‘gamification’ does NOT work

2017-07-12T08:46:53.593+00:00

Gamification is touted as new and a game changer, and it is not short of hyperbolic claims about increasing learning. Well, it’s not so new: games have been used in learning forever, from the very earliest days of computer-based learning. But that’s often the way with fads – people think they’re doing ground-breaking work when it’s been around for eons.

At last we have a study that actually tests ‘gamification’ and its effect on mental performance, using cognitive tests and brain scans. The Journal of Neuroscience, a respected, peer-reviewed journal, has just published an excellent study with the unambiguous title ‘No Effect of Commercial Cognitive Training on Neural Activity During Decision-Making’, by Kable et al.

Gamification has no effect on learning
The researchers looked for changes in behaviour in 128 young adults, using pre- and post-testing, before and after 10 weeks of training on gamified brain-training products (Lumosity), commercial computer games and normal practice. Specifically, they looked for improvements in memory, decision-making, sustained attention or the ability to switch between mental tasks. They found no improvements: “We found no evidence for relative benefits of cognitive training with respect to changes in decision-making behaviour or brain response, or for cognitive task performance.”

What is clever about the study is that three groups were tested:
1. Gamified quizzes (Lumosity)
2. Simple computer games
3. Simple practice
All three groups were found to have the ‘same’ level of improvement in tasks, so learning did take place, but the significant word here is ‘same’, showing that brain games and gamification had no special effect. Note that the Lumosity product is gamification (not a learning game), as it has gamification elements such as Lumosity scores, speed scores and so on; it is compared with a second, ‘game-based’ learning group, controlled against a third non-gamified, non-game, practice-only group. One of the problems here is the overlap between gamification and game-based learning. They are not entirely mutually exclusive, as most gamification techniques have pedagogic implications and are not just motivational elements.

The important point here is the one made by the 69 scientists who originally criticised the Lumosity product and its claims: any activity by the brain can improve performance, but that does not give gamification an advantage. In fact, the cognitive effort needed to master and play the ‘game’ components may take more overall effort than other, simpler methods of learning.

Lumosity have form
Lumosity are no strangers to false claims based on dodgy neuroscience, and were fined $2m in 2015 for claiming that evidence of neuroplasticity supported their claims on brain training. There is perhaps no other term in neuroscience more overused or misunderstood than ‘neuroplasticity’, as it is usually quoted as an excuse for going back to the old behaviourist ‘blank slate’ model of cognition and learning. Lumosity, and many others, were making outrageous claims about halting dementia and Alzheimer’s disease. Sixty-seven senior psychologists and neuroscientists blasted their claims and the Federal Trade Commission swung into action. The myth was literally busted.

Pavlovian gamification
I have argued for some time that the claims of gamification are exaggerated, and this stud[...]
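The logic of the study’s design – compare post-minus-pre gains across the three arms, and if the gains are the same, gamification confers no special advantage over plain practice – can be illustrated with toy numbers. The scores below are invented; only the design (pre/post gains compared across three groups) follows the study described above.

```python
from statistics import mean

# Invented (pre, post) test scores for three trial arms of the kind
# described above; the numbers are illustrative only.
groups = {
    "gamified (brain-training)": [(40, 52), (45, 56), (38, 50)],
    "simple computer games":     [(41, 53), (44, 55), (39, 50)],
    "simple practice":           [(42, 53), (43, 55), (40, 52)],
}

def mean_gain(scores):
    """Average post-minus-pre improvement for one group."""
    return mean(post - pre for pre, post in scores)

for name, scores in groups.items():
    print(f"{name}: mean gain {mean_gain(scores):.1f}")
# All three gains come out similar: learning happened in every arm,
# but the gamified arm shows no special advantage over plain practice.
```

A real analysis would of course use the full n=128 sample and a proper statistical test between groups, not a comparison of three toy means.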



Fractious Guardian debate: Tech in schools – money saver or waster

2017-06-17T10:46:17.878+00:00

7 reasons why ‘teacher research’ is a really bad idea

The Guardian hosted an education debate last night. It was pretty fractious, with the panel split down the middle and the audience similarly split. On one side lay the professional lobby, who saw teachers as the only drivers of tech in schools, doing their own research and being the decision makers. On the other side were those who wanted a more professional approach to procurement, based on objective research and cost-effectiveness analysis. What I heard was what I often hear at these events: that teachers should be the researchers and experimenters, adopting an entrepreneurial method, making judgements and determining procurement. I challenged this – robustly. Don’t teachers have enough on their plate, without taking on several of these other professional roles? Do they have the time, never mind the skills, to play all of these roles? (Thanks to Brother UK for pic.)

1. Anecdote is not research
To be reasonably objective in research you need to define your hypothesis, design the trial, select your sample, have a control, isolate variables and be good at gathering and interpreting the data. Do teachers have the time and skills to do this properly? Some may, but the vast majority do not. It normally requires a post-graduate degree (not in teaching) and some real research practice before you become even half good at this. I wouldn’t expect my GP to mess around with untested drugs and treatments on the basis of anecdotal evidence from other GPs; I want objective research by qualified medical researchers. En passant, let me give a famous example. Learning styles (VAK or VARK) were promulgated by Neil Fleming, a teacher, who based them on little more than armchair theorising. They are still believed by the majority of teachers, despite oodles of evidence to the contrary. This is what happens when bad teacher research spreads like a meme. It is believed because teachers rely on themselves and not objective evidence.

2. Not in the job description
Being a ‘researcher’ is not in the job description. Teaching is hard; it needs energy, dedication and focus. By all means seek out the research and apply what is regarded as good practice, but the idea that good practice is whatever any individual deems it to be, through their personal research, is a conceit. A school is not a personal lab – it has a purpose.

3. Don’t experiment on other people’s children
There is also the ethical issue of experimenting on other people’s children. I, as a parent, resent the idea that teachers will experiment on my children. I assume they’re at school to learn, not to be the subject of teachers’ ‘research’ projects in tech.

4. Category mistake
What qualifies a teacher to be a researcher? It’s like the word ‘leader’: when anyone can simply call themselves a leader, it renders the word meaningless. I have no problem with teachers seeking out good research, even making judgements about what they regard as useful and practical in their school, but that’s very different from calling yourself a ‘researcher’ and doing ‘research’ yourself. That’s a whole different ball park. This is a classic category mistake, shifting the meaning of a word to suit an agenda.

5. Entrepreneurial
This word came up a lot. We need more start-up companies in schools. Now that’s my world. I’m an investor, run an EdTech start-up, and, believe me, that’s the last thing [...]