Subscribe: Donald Clark Plan B
http://donaldclarkplanb.blogspot.com/atom.xml
Language: English

Donald Clark Plan B



What is Plan B? Not Plan A!



Updated: 2017-04-27T13:10:34.657+00:00

 



Snapchat’s smart pivot into an AR company – but is AR ready for learning?

2017-04-24T20:47:56.811+00:00

Augmentation, in ‘augmented’ reality, comes in all shapes, layers and forms, from bulky headsets and glasses to smartphones. At present the market has been characterised by a mix of expensive solutions (Hololens), failures (Google Glass, Snap Spectacles) and spectacular successes (Pokemon Go, Snapchat Filters). So where is all of this going?

Snapchat

Snapchat has pivoted, cleverly, into being not just another messenger service, but the world’s largest Augmented Reality company. Its ‘filters’, which change every day, use face recognition (AI) and layered graphics to deliver some fun stuff and, more importantly, advertising. It is a clever ploy, as it plays to the personal. You can use fun filters, create your own filter with a piece of dynamic art or buy one. It’s here that they’re building an advertising and corporate business on designed filters around events and products. That’s smart and explains why their valuation is stratospheric. Once you play around with Snapchat, you get why it’s such a big deal. As usual, it’s simple, useful, personal and compelling. With over 150 million users and an advertising revenue model that works on straight ads, sponsored filters and sponsored lenses (interactive filters), it has tapped into a market that simply never existed.

Snap Spectacles

Snap Spectacles was their interesting foray into the augmented glasses market – more of a gimmick than a realistic consumer product. Targeted only at Snapchat users, you can’t really wear them with regular glasses and all they do is record video – but, to be fair, they do that well. However, as with Google Glass, you feel like a bit of a twat. Not really a big-impact product.

Hololens

With its AI-driven interfaces – head pointing, gesture and voice recognition – it is neat but, at $3000 a pop, not really a commercial proposition for Microsoft. As for the ‘experience’, the limited rectangle that is the field of view is disappointing, and ‘killer’ applications are absent. There have been games, Skype applications, 2D and 3D visualisations, but nothing yet that really blows the mind – forget the idea of sci-fi holograms, it’s still all a bit ‘Pepper’s Ghost’ in feel, still tethered and has a long way to go before being a viable product.

https://www.microsoft.com/en-gb/hololens

Magic Leap

Bit of a mystery still, as they are a secretive lot. Despite having raised more than $1.4 billion from Google, Alibaba and Andreessen Horowitz, it has still to deliver whatever it is they want to deliver. Mired in technical problems, they may still pull something out of the bag with their glasses release this year – but it seems you have to wear a belt with some kit attached. Watch this space, as they say, as it is nothing but an empty space for now.

Pokemon Go

We saw the way this market was really going with Pokemon Go: layers of reality on a smartphone. Photographic camera layer, idealised graphic map layer, graphic Pokemon layer, graphic Pokestops layer, GPS layer, internet layer – all layered on to one screen into a compelling social and games experience. Your mind simply brings them all together into one conscious, beautiful, blended reality – more importantly, it was fun. This may be where augmented reality will remain until the miniaturisation of headsets and glasses gets down to the level of contact lenses.

AR v VR

I still prefer the full-punch, immersive VR experience. AR, in its current form, is a halfway-house experience. The headset and glasses seem like a stretch for all sorts of reasons.
You simply have to ask yourself: do I need all of this cost and equipment to see a solar system float in space, when I can see it in 3D on my computer screen? There are clearly many situations in which one would want to ‘layer’ on to reality but, in many learning situations, there may be simpler solutions.

Learning

So let’s look at specific learning outcomes that could be delivered and enhanced by Augmented Reality.

1. Explanations

Explanations, causes, rules, processes… delivered as text, audio, 2D, 3D images in physics, chemistry, b[...]



AI fail that will make you gag with disgust…

2017-04-21T20:30:51.785+00:00

Some of the first consumer robots were vacuum cleaners that bumped around your floors sucking up dirt and dust. The first models simply sensed when they hit something, a wall or a piece of furniture, turned the wheels and headed off in a different direction.

The latest vacuum robots actually map out your room and mathematically calculate the optimum route to vacuum the floor evenly (a toy sketch of the idea follows this post). They have a memory, build a mathematical model of the room with laser mapping and 360-degree cameras, can detect objects in real time and have clever corner-cleaning capability. They move room to room and can be operated from a mobile app – scheduling and so on. They will even automatically recharge when their batteries get low and resume from the point they left. Very impressive.

That’s not to say they’re perfect. Take this example, which happened to a friend of mine. He has a pet dog and, sure enough, the vacuum cleaner would bump into the dog on the carpet, turn and move on. The dog was initially puzzled, sniffed it a bit, but learned to ignore this new pet as something beneath his contempt as top dog. Cats even like to sit on them and take rides around the house.

Then, one day, the owner came back, opened his front door and was hit by a horrific wall of smell. The dog had taken a dump and the robot cleaner had smeared the shit evenly across the entire carpet, even into the corners, room by room, with a mathematical exactitude superior to that of any human cleaner. The smell was overwhelming and the clean-up a Herculean task on hands and knees, accompanied by regular gagging.

The lesson here is that AI is smart, and can replace humans in all sorts of tasks, but doesn’t have the checks and balances of normal human intelligence. In fact the attribution of the word intelligence, I'd argue (and have here), is an anthropomorphic category error, taking one category and applying it in a separate and completely different domain. It’s good at one thing, or a few things, such as moving, mapping and sucking, but it doesn’t know when the shit hits the fan.[...]
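For the curious, the ‘calculate the optimum route’ step can be sketched in a few lines. This is a minimal, hypothetical boustrophedon (back-and-forth) sweep over a grid map, with an invented room layout; real robots plan over laser-built maps and route around obstacles rather than skipping them:

```python
# Toy boustrophedon coverage: visit every free cell of a grid room
# by sweeping alternate rows left-to-right, then right-to-left.
# Real robots plan over laser-built maps; this is only the bare idea.

def coverage_path(grid):
    """grid[r][c] == 0 is free floor, 1 is an obstacle."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if row[c] == 0:          # skip furniture (ideally, also dog mess)
                path.append((r, c))
    return path

room = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # a sofa in the middle
    [0, 0, 0, 0],
]
print(coverage_path(room))
```

Note what the sketch also illustrates: the planner covers every reachable cell with mechanical thoroughness, which is exactly why the smearing in the story was so evenly distributed.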



Do bears shit in the woods? 7 reasons why data analytics is a misleading, myopic use of AI in HE

2017-04-02T17:22:58.953+00:00

I’m increasingly convinced that HE is being pulled in the wrong direction by its obsession with data analytics, at the expense of more fruitful uses of AI in learning. Sure, it has some efficacy, but the money being spent at present may be mostly wasted.

1. Bears in woods

Much of what is being paid for here is what I’d call answers to the question, ‘Do bears shit in the woods?’ What insights are being uncovered here? That drop-out is being caused by poor teaching and poor student support? That students with English as a second language struggle? Ask yourself whether these insights really are insights, or whether they’re something everyone knew in the first place.

2. You call that data?

The problem here is the paucity of data. Most Universities don’t even know how many students attend lectures (few record attendance), as they’re scared of the results. I can tell you that the actual data, when collected, paints a picture of catastrophic absence. That’s the first problem – poor data. Other data sources are similarly flawed, as there's little in the way of fine-grained feedback. It's small data sets, often messy, poorly structured and not understood.

3. Easier ways

Much of this so-called use of AI is like going over the top of your head with your right hand to scratch your left ear. Complex algorithmic approaches are likely to be more expensive, and far less reliable and verifiable, than simple measures like using a spreadsheet or making what little data you have available, in a digestible form, to faculty (see the sketch after this post).

4. Better uses of resources

The problem with spending all of your money on diagnosis, especially when the diagnosis is an obvious, limited set of possible causes that were probably already known, is that the money is usually better spent on treatment. Look at improving student support, teaching and learning, not dodgy diagnosis.

5. Action not analytics

In practice, when those amazing insights come through, what do institutions actually do? Do they record lectures, because students with English as a foreign language find some lecturers difficult and the psychology of learning screams at us to let students have repeated access to resources? Do they tackle the issue of poor teaching by specific lecturers? Do they question the use of lectures and shift to active learning (easily the most important intervention, as the research shows)? Do they improve response times on feedback to students? Do they drop the essay as a lazy and monolithic form of assessment? Or do they waffle on about improving the ‘student experience’ where nothing much changes?

6. Evaluation

I see a lot of presentations about why one should do data analytics – mostly around preventing drop-out. I don’t see much in the way of verifiable analysis showing that data analytics has been the actual causal factor in preventing future drop-out. I mean a cost-effectiveness analysis. This is not easy, but it would convince me.

7. Myopic view of AI

AI is many things, and a far better use of AI in HE, in my opinion, is to improve teaching through personalised, adaptive learning, better feedback, student support, active learning, content creation and assessment. All of these are available right now. They address the REAL problem – teaching and learning.

Conclusion

To be fair, I applaud efforts from the likes of JISC to offer a data locker, so that institutions can store, share and use bigger data sets. This solves some legal problems and goes some way to addressing the issue of small data. But it is, as yet, a wholly unproven approach.
I work in AI in learning, have an AI learning company, invest in AI EdTech companies, am on the board of an AI learning company, speak on the subject all over the world and write constantly on the subject. You’d expect me to be a big fan of data analytics in HE – I’m not. Not yet. I’d never say never, but so much of this seems like playing around with the problem, rather than facing up to solving it.[...]
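On the ‘simple measures’ point in 3 above, here is a minimal, hypothetical sketch of the spreadsheet-grade alternative: a few lines of pandas over raw attendance records. The file and column names are invented for illustration:

```python
# A 'simple measure' beats an opaque model: flag low attendance directly.
# Hypothetical CSV with columns: student_id, module, attended (0 or 1).
import pandas as pd

df = pd.read_csv("attendance.csv")
rates = df.groupby("student_id")["attended"].mean()

# Students below 50% attendance, worst first, for faculty follow-up.
at_risk = rates[rates < 0.5].sort_values()
print(at_risk.head(10))
```

No algorithmic diagnosis, no black box: just the data the institution already has, put in a digestible form.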



AI is the new UI: 7 ways AI shapes your online experience

2017-02-26T15:29:44.572+00:00

HAL stands for ‘Heuristically programmed ALgorithmic computer’. Turns out that HAL has become a reality. Indeed, we deal with thousands of useful HALs every time we go online. Whenever you are online, you are using AI. As the online revolution has accelerated, the often invisible application of AI and algorithms has crept into a vast range of our online activities. A brief history of algorithms includes the Sumerians, Euclid, the origins of the term (Al Khwarismi), Fibonacci, Leibniz, Gauss, Laplace, Boole and Bayes, but in the 21st century ubiquitous computing and the internet have taken algorithms into the homes and minds of everyone who uses the web.

You’re reading this from a network, using software, on a device, all of which rely fundamentally on algorithms and AI. The vast portion of the software iceberg that lies beneath the surface, doing its clever but invisible thing – the real building blocks of contemporary computing – is algorithms and AI. Whenever you search, get online recommendations, engage with social media, buy, do online banking or online dating, or see online ads, algorithms are doing their devilishly clever work.

BCG’s ten most innovative companies 2016

Boston Consulting Group publish this list every year:

Apple
Google
Tesla
Microsoft
Amazon
Netflix
Samsung
Toyota
Facebook
IBM

Note how it is dominated by companies that deliver access and services online. Note that all, apart perhaps from Toyota, are turning themselves into AI companies. Some, such as IBM, Google and Microsoft, have been explicit about this strategy. Others, such as Apple, Samsung, Netflix and Facebook, have been acquiring skills and have huge research resources in AI. Note also that Tesla, albeit a car company, is really an AI company. Their cars are always-on, learning robots. We are seeing a shift in technology towards ubiquitous AI.

1. Search

We have all been immersed in AI since we first started using Google. Google is AI. Google exemplifies the success of AI, having created one of the most successful companies ever on the back of AI. Beyond simple search, they also enable more specific AI-driven search through Google Scholar, Google Maps and other services. Whether it is documents, videos, images, audio or maps, search has become the ubiquitous mode of access. AI is the real enabler when it comes to access. Search engine indexing finds needles in the world’s biggest haystack. Search for something on the web and you’re ‘indexing’ billions of documents and images. Not a trivial task, and it needs smart algorithms to do it at all, never mind in a tiny fraction of a second. PageRank was the technology that made Google one of the biggest companies in the world (a toy sketch of the idea follows this post). Google has moved on; nevertheless, the multiple algorithms that rank results when you search are very smart. We all have, at our fingertips, the ability to research and find the things that only a tiny elite had access to only 20 years ago.

2. Recommendations

Amazon has built the world’s largest retail company with a raw focus on the user experience, presented by their recommendation engine. Their AI platform, Alexa, now delivers a range of services, but Amazon was made famous by its recommendations, first on books, now on other goods. But recommendation engines are now everywhere on the web. You are more often than not presented with choices that are pre-selected, rather than the result of a search. Netflix is a good example, where the tiling is tailored to your needs.
Most social media feeds are now AI-driven, as are many online services, where what you (and others) do determines what you see.

3. Communications

Siri, VIV, Cortana, Alexa… voice recognition, enabled by advances in AI through Natural Language Processing, has changed the way we communicate with technology. As speech is our natural form of communication, it is a more natural interface, giving significant advantages in some contexts. We are now in a position of seeing speech recognition move from being a topic of research to real commercial applic[...]
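For the curious, here is a toy sketch of the idea behind PageRank mentioned above – a page is important if important pages link to it – using power iteration on a hypothetical four-page link graph. This is the textbook idea, not Google's production algorithm:

```python
# Toy PageRank by power iteration over a made-up four-page web.
# Damping factor 0.85, as in the original formulation.

links = {            # page -> pages it links out to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
d = 0.85

for _ in range(50):                        # iterate towards convergence
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)   # share rank along outlinks
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # C ranks highest
```

Page C wins because three pages point at it; the recursion means a link from a highly ranked page counts for more than a link from an obscure one.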



7 myths about AI

2017-02-13T16:58:25.448+00:00

Big Blue beat Kasparov in 1997, but chess is thriving. AI remains our servant, not master. Yet mention AI and people jump to the far end of the conceptual spectrum with big-picture, dystopian and usually exaggerated visions of humanoid robots, the singularity and the existential threat to our species. This is fuelled by cultural messages going back to the Greek Prometheus myth, propelled by Mary Shelley’s Frankenstein (subtitled The Modern Prometheus) through nearly a century of movies, from Metropolis onwards, that portray created intelligence as a threat. The truth is more prosaic.

Work with people in AI and you’ll quickly be brought back from sci-fi visions to more practical matters. Most practical, and even theoretical, AI is what is called ‘weak’ or ‘narrow’ AI. It has no ‘cognitive’ ability. I blame IBM and the consultancies for this hyperbole. There is no ‘consciousness’. IBM's Watson may have beaten the Jeopardy champions, Google’s AlphaGo may have beaten the GO champion – but neither knew they had won.

The danger is that people over-promise and under-deliver, so that there's disappointment in the market. We need to keep a level head here and not see AI as the solution to everything. In fact, many problems need far simpler solutions.

AI is the new UI

AI is everywhere. You use it every day when you use Google, Amazon, social media, online dating, Netflix, music streaming services, your mobile and any file you create, store or open. Our online experiences are largely of AI-driven services. It's just not that visible. AI is the new UI. However, there are several things we need to know about AI if we are to understand and use it well in our specific domain, and in this case it is teaching and learning. The myths:

1. AI is ‘intelligent’
2. AI is all about the brain
3. AI is conscious
4. AI is strong
5. AI is general
6. AI is one thing
7. AI doesn’t affect me

1. AI is not ‘intelligent’

I have argued that the word ‘intelligent’ is misleading in AI. It pulls us toward a too anthropomorphic view of AI, suggesting that it is ‘intelligent’ in the sense of human intelligence. This is a mistake, as the word ‘intelligence’ is loaded with human preconceptions. It is better to see AI in terms of general tasks and competences, not as being intrinsically intelligent.

http://donaldclarkplanb.blogspot.co.uk/search?q=Why+AI+needs+to+drop+the+word+‘intelligence’

2. AI is not about the brain

AI is coded and, as such, most of it does not reflect what happens in the human brain. Even the so-called ‘neural network’ approach is only loosely modelled on the networked structure of the brain. It is more analogy than replication. It’s a well-worn argument, but we did not learn to fly by copying the flapping of birds’ wings and we didn’t learn to go faster by copying the legs of a cheetah – we invented the wheel. Similarly with AI.

3. AI is not cognitive

IBM's marketing of AI as ‘cognitive technology’ is way off. Fine if they mean it can perform or mimic certain cognitive tasks, but they go further, suggesting that it is in many senses ‘cognitive’. This is quite simply wrong. It has no consciousness, no real general problem-solving abilities, none of the many cognitive qualities of human minds. It is maths (a toy illustration follows this post). This is not necessarily a bad thing, as it is free from forgetting, racism, sexism and cognitive biases, doesn’t need to sleep, networks and doesn’t die. In other words, AI is about doing things better than brains, but by other means.

4. AI is weak

There is little to fear from AI threatening our independence and autonomy in the short term.
Almost all AI is what is called ‘weak’ AI: programs run on computers that simulate what humans can do. Strong AI is the idea that it actually does what the brain does. My own view is that we are very firmly at the ‘weak’ stage, but that the distinction is actually on a spectrum, like cool to hot. That’s not to say that ‘strong[...]
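A small illustration of the ‘it is maths’ point in myth 3: everything a so-called artificial ‘neuron’ does is a weighted sum pushed through a squashing function – arithmetic, not cognition. The numbers below are arbitrary:

```python
# One 'neuron': weighted sum plus bias, squashed by a sigmoid.
# Analogy with the brain, not replication of it.
import numpy as np

def neuron(inputs, weights, bias):
    return 1 / (1 + np.exp(-(np.dot(inputs, weights) + bias)))  # sigmoid

x = np.array([0.2, 0.9, 0.4])    # incoming signals
w = np.array([0.7, -1.2, 0.5])   # learned weights
print(neuron(x, w, bias=0.1))    # a number between 0 and 1 - no awareness involved
```

Stack millions of these and you get impressive competences, but the building block remains a sum and a squash.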



Elon Musk – the Bowie of Business

2017-01-24T18:45:56.696+00:00

Having just finished Morley’s brilliant biography of Bowie, it struck me that Musk is the Bowie of business. Constantly reinventing himself: PayPal hero, Tesla road warrior, Solar City sun god, Starman with SpaceX and now the sci-fi Hyperloop hipster – and he’s still only in his forties. Strange fact, this, but the first Tesla car was codenamed DarkStar. But let’s not stretch Bowie's leg warmers too far. Ashlee Vance’s biography of Elon Musk is magnificent for mostly other reasons. It’s about Musk the man, his psychology. There’s a manic intensity to Musk, but it’s directed, purposeful and, as Vance says, it’s not about making money. Time and time again he puts everything he’s made into the next, even weirder and riskier project. Neither is he a classic business guy or entrepreneur. For him questions come first and everything he does is about finding answers. He despises the waste of intellect that gets sucked into law and finance; he’s a child of the Enlightenment and sees as his destiny the need to accelerate progress. He doesn’t want to oil the wheels; he wants to drive, foot to the metal, the fastest electric car ever made, then ride a rocket all the way to Mars. As he says, he wants to die there – just not on impact. Always on the edge of chaos, like a kite that does its best work when it stalls and falls, but then soars.

Time and time again, experience tells me, and I read, about actual leaders who bear no resemblance to the utopian model presented by the bandwagon ‘Leadership’ industry. The one exception is Stanford’s Pfeffer, who also sees the leadership industry as peddling unreal, utopian platitudes. Musk has a string of business successes behind him, including PayPal, and is the major shareholder in three massive, public companies, all of which are innovative, successful and global. He has taken on the aerospace, car and energy industries at breathtaking speed, with mind-blowing innovation. Yet he is known to be mercurial, cantankerous, eccentric, mean, capricious, demanding and blunt; he delivers vicious barbs, swears like a trooper, takes things personally, lacks loyalty and has what Vance calls a ‘cruel stoicism’ – all of these terms taken from the book. He demands long hours and devotion to the cause and is cavalier in firing people. “Working at Tesla was like being Kurtz in Apocalypse Now”. So, for those acolytes of ‘Leadership’ and all the bullshit that goes with that domain, he breaks every damn rule – then again, so do most of them – in fact, that’s exactly why they succeed. They’re up against woozies who believe all that shit about leading from behind.

So why are people loyal to him, and why does he attract the best talent in the field? Well, he has vision. He also has a deep knowledge of technology, is obsessive about detail, takes rapid decisions, doesn’t like burdensome reports and bureaucracy, likes shortcuts and is a fanatic when it comes to keeping costs down. Two small asides – he likes people to turn up at the same time in the morning and hates acronyms. I like this. His employees are not playing pool or darts mid-morning and don’t lie around being mindful on brightly coloured bean bags. It’s relentless brainwork to solve problems against insane deadlines. You may disagree, but he does think that it is only technology that will deliver us from climate change and the dependence on oil, and allow us to inhabit planets other than our own; his businesses form a nexus of energy production, storage and utilisation that, he thinks, will save our species. He may be right. [...]



Corbyn is right - proud that my fat cat pay campaign against the CIPD paid off

2017-01-14T14:12:58.359+00:00

Corbyn has come under attack for his wibbly-wobbly performance on top pay. Oh, how some liberals I know, who pretend to care about the poor, mocked him on social media. But for me, he was on the money. To my mind, this needs urgent attention. First, use the lever of public funding and contracts; second, use ratios (around 20:1, with checks on range), along with caps.

The blog is mightier than the sword

Not a lot of people know this (to be said in a Michael Caine accent), but in 2010 I was single-handedly responsible for slashing fat cat pay in a major institution, through blogging. It was the CIPD. I read the accounts and pointed out that the CEO salary was just short of £500k. Not bad for an organisation whose commercial revenue had plummeted (down 23%), research contracted and ridiculed (down 57% and a report pulled), magazine imploded (down 83%), investment returns bombed (down 74.7%) and a membership who were angry and alienated about a command-and-control culture that left them with fewer services and starved of cash. There was also the issue of an odd and overpriced acquisition (Bridges Communications) and a blatant falsehood on her CV on the CIPD website. It seemed outrageous that the leading organisation in 'personnel', which often opines on executive pay, should be taking such liberties. You can read this in detail here.

This caused a shitstorm. The CIPD Chair chipped in to defend the claim, but it simply exposed the fact that the fat cat stuff had been going on for years, when it was shown that the previous CEO, Geoff Armstrong, had also earned a cool half a million. You can read the ludicrous defence here.

What happened next was comical. Personnel Today picked up on my Jackie Orme story and laid out the case with a link to my blog, along with an official response from the CIPD, and got a survey going (see story here). Does CIPD CEO deserve £87,000 bonus? Result: NO 94%, YES 6%! Pretty conclusive, and things changed very quickly. To cut a long story short, the current CEO of the CIPD, Peter Cheese, earned £250k last year. Result.

This is one of the reasons I never ever joined the CIPD and never will. I have a healthy distrust of membership organisations, which usually end up serving not their members but the staff of the institution itself. Needless to say, from that day the CIPD has never invited me to any event or conference or involved me in any project. It was worth it. Call them out – it can work, as they hate the publicity. The moral of this story is to use the power of the pen to attack these people personally, and the remuneration committees that support these extortionate salaries (and bonuses, and other benefits). Believe me, it is extortion. I’ve been on these boards. It is literally extortion from the public purse.

Universities

The first target should be the Universities. The pay at the top has sprinted ahead of the pack. Last year the Russell Group got an average 6% pay rise, taking their average annual package to £366,000. All of this on the back of the widespread and indefensible exploitation of part-time and low-paid teaching staff. This is a disgrace. There’s also the issue of minimum wages right at the bottom. Don’t imagine for one minute that academe is in any sense a beacon of equality or morals. They’re rapacious at the top. Given that they receive huge amounts of public money, a large chunk through student fees, which are in effect government-backed loans that they are not responsible for collecting, we have an easy lever here.
Get those ratios working, or your funding gets questioned.

Charities

I’ve also been a Trustee on some very big educational charities. They pull every trick in the book. On the whole these are low-growth, high-reward environments, where bonuses are awarded for the merest fart of effort. If the law changes on pensions, they’ll simply give top-up cash awards to compensate the CEO. Imagine doing that fo[...]



The future of parenthood through AI – meet Aristotle, Mattel's new parent bot

2017-02-10T23:01:12.263+00:00

Parents obviously play an important role in bringing up and educating their children. But it’s fraught with difficulties and pitfalls. Parents tend to face this, for the first time, without much preparation, and most would admit to botching at least some of it along the way. Many parents work hard and don’t have as much time with their children as they’d like. Few escape the inevitable conflicts over expectations, homework, diet, behaviour and so on. So what role could AI play in all this?

AI wedge

Domestic broadband was the first edge of the wedge. Smartphones, tablets and laptops were suddenly in the hands of our children, who lapped them up with a passion. Now, with the introduction of smart, voice-activated devices into the home, a new proxy parent may have arrived: devices that listen, understand and speak back, even perform tasks.

Enter Aristotle

Enter Aristotle, Mattel’s $300 assistant. They may have called it Aristotle because both his parents died when he was young, because he was the able teacher of Alexander the Great, or because Aristotle set going the whole empirical, scientific tradition that led to AI. To be honest, what’s far more likely is that it sounds Greek, classical and authoritative. (Aristotle's view on education here.)

It’s a sort of Amazon Echo or Google Home for kids, designed for their bedrooms. To be fair, the baby alarm has been around for a long time, so tech has been playing this role in some fashion for some time, largely giving parents peace of mind. It is inevitable that such devices get smarter. By smart, I mean several things. First, it uses voice, to both listen and respond. That’s good. I’ve noticed, in using Amazon Echo, how much I’ve had to speak carefully and precisely to get action (see my thoughts on Echo here). There may come a time when early language development, which we know is important in child development, could be enhanced by such AI companions. It may also encourage listening skills. Secondly, it may encourage and satisfy curiosity. These devices are endlessly patient. They don’t get tired or grumpy, are alert and awake 24/7 and will get very smart. Thirdly, they may enhance parenthood in ways we have yet to imagine.

Child

One aspect of the technology that does appeal is its personalised voice recognition. It knows the child’s voice. This could be useful. One area that could lessen embarrassment on both sides is timely sex education and advice. This could satisfy the child’s natural curiosity without the angst that child-parent communications can involve, as long as the child knows it is confidential and the parent is in control. As the child gets older, got a dispute over a fact? Amazon Echo, or an Aristotle, may sort it out. Stuck with your homework? These devices will inevitably be able to help. There’s already an app, Photomath, that students love and teachers hate: you simply point it at a mathematics problem and it not only gives you the answer but all the steps in between. Few parents would be able to do this. Similarly with other subjects and languages. There’s no reason why the knowledge of the parent should limit the ability of a child to learn. The important thing is not to let such devices become substitutes for the horrific Tiger Mom experiences, hot-housing kids with endless exercises. Learning could be done in a measured fashion.
And what parent wouldn’t want such a device to become an alarm, especially on school days?

Parent

The Aristotle device is designed to allow you to track feeds, wet nappies and so on, even buy the necessaries. What could also be useful is the availability of a source of good advice on parenting. I can still remember the times when one of my kids got ill – the sheer panic and worry. We had twins and would have loved good advice, mainly on what not to do – like tell the one who came [...]



First flirtations with Alexa (Amazon Echo)… 10 interesting uses in learning

2017-01-28T16:31:05.640+00:00

I’ve spent the last few years working with AI, with WildFire and CogBooks, but here it is, in my home folks, on my kitchen table. First impressions? Looks good, a Space Odyssey-like column in black, and my first encounter is quietly weird – a moving circle of blue light, a HAL-like halo, appears, then a female voice. I say 'Hello', she says 'Hi'. I’m nonplussed by the gender voice farrago – a sort of ‘damned if you don’t, damned if you do’ argument. What I’m really interested in is whether it is of any use. So a few ‘get to know you’ questions seemed appropriate.

Where are you?
I’m here and my head is in the cloud.

Where do you come from?
I am designed by Amazon. They’re based in Seattle, Washington.

Are you a man or a woman?
I’m female in character.

Volume was a little low, so… Turn up volume. She did!

How tall are you?
I’m just over 232 millimeters.

How wide are you?
My diameter is 7.6 centimeters.

What weight are you?
I’m just over 1 kilogram.

What do you like?
I like to read.

How do you feel today?
I’m very well, thanks.

OK, let’s up the ante – everybody does this and, I have to say, it felt a little transgressive… I swear at her…
That’s not very nice to say.

OK, tell me a joke.
What does a house wear? A dress.

Several fairly anodyne jokes later… OK, enough of the small talk.

First up, let’s not compare Alexa to a human. It’s all too easy to do the ‘but she can’t do this… or that…’ thing. I’m not looking for a life companion, or a friend – I want to see if she’s useful. This is the first time I’ve used voice recognition in anger, woven into my life, so I’m keen to focus not on problems but on potential. So far, the voice recognition is damn good. I have a strong accent; it doesn’t throw her, and variations on the phrasing of questions seem to work (not always). There's a real problem with near-sounding homophones, but you learn to be more precise in your pronunciation. Next line of enquiry: ‘time’.

Time

You can ask it the time or date, even holiday dates, the number of days until a holiday and so on. The sort of practical stuff we all need.

What time is it?
Bang on.

What date is it?
Day of the week and date.

When is Burns Night?
Burns Night will be on Wednesday 25 January 2017.

How many days to Burns Night?
There are 19 days until Burns Night.

The timer functions are also neat, as these are often annoyingly fiddly on your cooker or alarm clock. How often do you pop something in the oven and either ‘look to check’ or suddenly smell the charred remains?

Set a timer for 10 minutes.
Set a second timer for 20 minutes.
How much time is left on my timer?

Then there are the alarm functions.

Set alarm for 7.30 tomorrow morning.

All good: just ask, it confirms the time – done. Beyond this, she integrates with Google Calendar, reminding you of what you have to do today, tomorrow…

To do lists

To do lists are neat. I use a small notebook, but for household stuff, a shopping list or to do list in the kitchen is handy. We can all add to the list. My gut feel, however, is that this will go the way of the chalkboard – unloved and unused.

OK, let’s pause, as future uses are starting to emerge…

Use 1 – Work and Personal Assistant

Only a start, but I can already see this being used in organisations, sitting on the meeting room table, with alarms set for 30 mins, 45 mins and five mins in an hour-long meeting (a toy skill sketch follows this post). Once fully developed, it could be an ideal resource in meetings for company information – financial and otherwise. In fact, it struck me, just playing around with these functions, that Alexa, as it evolves, will eventually make an ideal PA.
Managers, according to a recent Harvard Business Review survey, spend 57% of their time on admin. Room for improvement there, I think, and an admin assistant seems likely. I've written a much longer piece on AI and[...]
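For the technically curious, here is a minimal sketch of what a custom ‘meeting timer’ handler might look like with the Alexa Skills Kit SDK for Python. The intent name and spoken reply are hypothetical, not a real published skill:

```python
# Minimal Alexa skill handler sketch (ASK SDK for Python).
# "MeetingTimerIntent" is a hypothetical custom intent.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class MeetingTimerIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("MeetingTimerIntent")(handler_input)

    def handle(self, handler_input):
        # A real skill would set actual timers via Alexa's timer
        # capabilities; this sketch just speaks an acknowledgement.
        speech = "OK, I'll warn you at thirty and forty-five minutes."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(MeetingTimerIntentHandler())
lambda_handler = sb.lambda_handler()   # entry point when hosted on AWS Lambda
```

The pattern is the point: a predicate says which utterances a handler owns, and the handler builds the spoken response; everything else (wake word, speech recognition, intent matching) is Amazon's AI doing the invisible work.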



AI isn’t a prediction, it’s happening. How fast? What do the experts say?

2017-01-01T17:36:12.703+00:00

AI predictions have been notoriously poor in the past. The idea that machines will transcend man has been the subject of speculative thought for millennia. We can go all the way back to Prometheus Bound, by Aeschylus, one of the oldest Greek tragedies we have, where the god Prometheus is shackled to a rock, his liver eaten by an eagle for eternity. His crime? To have given man fire, and knowledge of writing, mathematics, astronomy, agriculture and medicine. The Greeks saw this as a topic of huge tragic import – the idea that we had the knowledge and tools to challenge the Gods. It was a theme that was to recur, especially in the Romantic period, with Goethe, Percy Bysshe Shelley and Mary Shelley, who subtitled her book Frankenstein ‘The Modern Prometheus’. Remember that Frankenstein is not the created monster but his creator. In many ways AI prediction is still largely in this romantic tradition – one that values idle thought and commentary over reason.

There is something fascinating about prediction in AI, as that is what the field purports to do – the predictive analytics embedded in consumer services, such as Google, Amazon and Netflix, have been around for a long time. Their invisible hand has been guiding your behaviour, almost without notice. So what does the field of AI have to say about itself? Putting aside Kurzweil’s (2005) singularity, as being too crude and singularly utopian, there have been some significant surveys of experts in the field.

Survey 1: 1973

One of the earliest surveys was of 67 AI experts in 1973, by the famous AI researcher Donald Michie. He asked how many years it would be until we would see “computing exhibiting intelligence at adult human level”. There was no great rush towards hype at that time. Indeed, the rise in 1993 turned out to be reasonable, as by that time there had been significant advances, and 2023 and beyond seems not too unreasonable, even now.

Survey 2: 2006

Jumping to Moor (2006), a survey was taken at Dartmouth College, on the 50th anniversary of the famous AI conference organised by John McCarthy in 1956, where the modern age of AI started. Again, we see a considerable level of scepticism, including substantial numbers of complete sceptics who answered 'Never' to these questions.

Survey 3: 2011

Baum et al. (2011) took the Turing Test as their benchmark, and surveyed on the ability to pass Turing tests, with the following results (50% probability):

Third grade – 2030
Turing test – 2040
Nobel research – 2045

Given that the Todai project got a range of AI techniques to pass the Tokyo University entrance exam, this may seem like an underestimate of progress.

Survey 4: 2014

The most recent serious attempt, where all of the above data is summarised if you want more detail, was by Müller and Bostrom (2014). Their conclusion, based on surveying four groups of experts, was:

“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.”

So still some way off: 50:50 at around 30 years, and getting more certain 50 years hence. A particularly interesting set of predictions in this survey was around the areas of probable advances:

Cognitive science – 47.9%
Integrated cognitive architectures – 42.0%
Algorithms revealed by computational neuroscience – 42.0%
Artificial neural networks – 39.6%
Faster computing hardware – 37.3%
Large-scale datasets – 35.5%
Embodied systems – 34.9%
Other method(s) currently completely unknown – 32.5%
Whole br[...]



2016 was the smartest year ever

2016-12-31T00:29:42.011+00:00

For me, 2016 was the year of AI. It went from an esoteric subject to a topic you’d discuss down the pub. In lectures on AI in learning around the world – in New Zealand, Australia, Canada, the US, the UK and around Europe – I could see that this was THE zeitgeist topic of the year. More than this, things kept happening that made it very real…

1. AI predicts a Trump win

One particular instance was a terrifying epiphany. I was teaching on AI at the University of Philadelphia, on the morning of the presidential election, and showed AI predictions which pointed to a Trump win. Oh, how they laughed – but the next morning confirmed my view that old media and pollsters were stuck in an Alexander Graham Bell world of telephone polls, while the world had leapt forward to data gathering from social media and other sources. They were wrong because they don’t understand technology. It’s their own fault, as they have an in-built distaste for new technology, as it’s seen as a threat. At a deeper level, Trump won because of technology. The deep cause, ‘technology replacing jobs’, has already come to pass. It was always thus. Agriculture was mechanised and we moved into factories; factories were automated and we moved into offices. Offices are now being mechanised and we’ve nowhere to go. AI will be the primary political, economic and moral issue for the next 50 years.

2. AI predicts a Brexit win

On the same basis, using social media data predictions, I predicted a Brexit win. The difference here was that I voted for Brexit. I had a long list of reasons – democratic, fiscal, economic and moral – but above all, it had become obvious that the media and traditional, elitist commentators had lost touch with both the issues and the data. I was a bit surprised at the abuse I received, online and face-to-face, but the underlying cause, technology replacing meaningful jobs, has come to pass in the UK also. We can go forward in a death embrace with the EU or create our own future. I chose the latter.

3. Tesla times

I sat in my mate Paul Macilveney’s Tesla (he has one of only two in Northern Ireland) while it accelerated (silently), pushing my head back into the passenger seat. It was a gas without the gas. On the dashboard display I could see two cars ahead, and vehicles all around the car, even though they were invisible to the driver. Near the end of the year we saw a Tesla car predict an accident between two other unseen cars, before it happened. But it was when Paul took his hands off the steering wheel, as we cambered round the corner of a narrow road in Donegal, that the future came into focus. In 2016, self-driving cars became real, and inevitable. The car is now a robot in which one travels. It has agency. More than this, it learns. It learns your roads, routes and preferences. It is also connected to the internet, and that learning, the mapping of roads, is shared with all as-yet-unborn cars.

4. AI on tap

As the tech giants motored ahead with innumerable acquisitions and the development of major AI initiatives, some even redefining themselves as AI companies (IBM and Google), it was suddenly possible to use their APIs to do useful things. AI became a commodity or utility – on tap. That proved useful, very useful, in starting a business.

5. OpenAI

However, as an antidote to the danger that the tech monsters will be masters of the AI universe, Elon Musk started OpenAI. This is already proving to be a useful open source resource for developers. Its ‘Universe’ is a collection of test environments in which you can run your algorithms.
This is a worthy initiative that balances out the monopolising effect of private, black-box, IP-driven AI.

6. Breakthroughs

There were also some astounding successes across the year. Google beat a Go champion at the most complex game we know. Time and tim[...]



Brains - 10 deep flaws and why AI may be the fix

2017-02-04T17:41:25.441+00:00

Every card has a number on one side and a letter on the other.
If a card has a D, then it has a 3 on the other side.
What is the smallest number of cards you have to turn over to verify whether the rule holds?

D      F      3      7

(Answer at end; a brute-force check also follows this post.)

Most people get this wrong, due to a cognitive weakness we have – confirmation bias. We look for examples that confirm our beliefs, whereas we should look for examples that disconfirm our beliefs. This, along with many other biases, is well documented by Kahneman in Thinking, Fast and Slow. Our tragedy as a species is that our cognitive apparatus and, especially, our brains have evolved for purposes different from their contemporary needs. This makes things very difficult for teachers and trainers. Our role in life is to improve the performance of that one organ, yet it remains stubbornly resistant to learning.

1. Brains need 20 years of parenting and schooling

It takes around 16 years of intensive and nurturing parenting to turn them into adults who can function autonomously. Years of parenting, at times fraught with conflict, while the teenage brain, as brilliantly observed by Judith Harris, gets obsessed with peer groups. This nurturing needs to be supplemented by around 13 years of sitting in classrooms being taught by other brains – a process that is painful for all involved: pupils, parents and teachers. Increasingly this is followed by several years in college or University, to prepare the brain for an increasingly complex world.

2. Brains are inattentive

You don't have to be a teacher or parent for long to realise how inattentive and easily distracted brains can be. Attention is a necessary condition for learning, yet they are so easily distracted.

3. Fallible memories

Our memories are limited not only by the narrow channel that is working memory but by the massive failure to shunt what we learn from working to long-term memory. And even when memories get into long-term memory, they are subject to further forgetting, even reconfiguration into false memories. Every recalled memory is an act of recreation and reconstitution, and therefore fallible. Without reinforcement we retain and recall very little. This makes brains very difficult to teach.

4. Brains are biased

The brain is inherently biased: not only sexist and racist, it has dozens of cognitive biases, such as groupthink, confirmation bias and many other dangerous biases that shape and limit thought. More than this, it has severe weaknesses and inherent tendencies, such as motion sickness, overeating, jet-lag, phobias, social anxieties, violent tendencies, addiction, delusions and psychosis. This is not an organ that is inherently stable.

5. Brains need sleep

Our brains sleep eight hours a day; that’s one third of life gone, down the drain. Cut back on this and we learn less, get more stressed, even ill. Keeping the brain awake, as torturers will attest, will drive it to madness. Even when awake, brains are inattentive and prone to daydreaming. This is not an organ that takes easily to being on task.

6. Brains can’t upload and download

Brains can’t upload and download. You cannot pass your knowledge and skills to me without a huge amount of motivated teaching and learning. AI can do this in an instant.

7. Brains can't network

Our attempts at collective learning are still clumsy, yet collective learning and intelligence is a feature of modern AI.

8. Brains can't multitask

This is not quite true, as they regulate lots of bodily functions, such as breathing and balance, while doing other things. However, brains don't multitask at the level required for some breakthroughs. What seems like multitasking [...]
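On the opening puzzle (the classic Wason selection task): the logic can be checked by brute force. A minimal sketch, enumerating what each hidden face could be and turning only the cards that could falsify the rule:

```python
# Wason selection task: which cards could falsify "if D then 3"?
# Each card pairs one letter with one number; you see only one face.

def breaks_rule(letter, number):
    # The rule is broken only by a card pairing D with a non-3 number.
    return letter == "D" and number != "3"

letters, numbers = ["D", "F"], ["3", "7"]

must_turn = []
for face in letters + numbers:
    if face in letters:
        hidden = [(face, n) for n in numbers]   # hidden side is a number
    else:
        hidden = [(l, face) for l in letters]   # hidden side is a letter
    # Turn a card only if some possible hidden side could break the rule.
    if any(breaks_rule(l, n) for l, n in hidden):
        must_turn.append(face)

print(must_turn)  # ['D', '7'] - the confirming '3' tells you nothing
```

This is confirmation bias made concrete: most people pick D and 3 (the confirming cards), but the 3 card cannot falsify the rule, while the 7 card can, if it has a D on its other side.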



Bot teacher that impressed and fooled everyone

2017-02-03T10:44:03.422+00:00

An ever-present problem in teaching, especially online, is the very many queries and questions from students. In the Georgia Tech online course this was up to 10,000 per semester from a class of 350 students (300 online, 50 on campus). It’s hard to get your head round that number, but Ashok Goel, the course leader, estimates that it is one year's work for a full-time teacher.

The good news is that Ashok Goel is an AI guy and saw his own subject as a possible solution to this problem. If he could get a bot to handle the predictable, commonplace questions, his teaching assistants could focus on the more interesting, creative and critical questions. This is an interesting development, as it brings tech back to the Socratic, conversational, dialogue model that most see as lying at the heart of teaching.

Jill Watson – fortunate error

How does it work? It all started with a mistake. Her name, Jill Watson, came from the mistaken belief that Tom Watson’s (IBM’s legendary CEO) wife was called Jill – her name was actually Jeanette. Four semesters of query data, 40,000 questions and answers, and other chat data were uploaded and, using Bluemix (IBM’s app development environment for Watson and other software), Jill was ready to be trained. Initial efforts produced answers that were wrong, even bizarre, but with lots of training and agile software development, Jill got a lot better and was launched upon her unsuspecting students in the spring semester of 2016.

Bot solution

Jill solved a serious problem – workload. But the problem is not just scale. Students ask the same questions over and over again, but in many different forms, so you need to deal with lots of variation in natural language. This lies at the heart of the chatbot solution – a more natural, flowing, frictionless, Socratic form of dialogue with students. The database therefore had many species of questions, categorised, and as a new question came in, Jill was trained to categorise the new question and find an answer (a toy sketch of this pattern follows the post).

With such systems it sometimes gets it right, sometimes wrong. So a mirror forum was used, moderated by a human tutor. Rather than relying on memory, they added context and structure, and performance jumped to 97%. At that point they decided to remove the mirror forum. Interestingly, they had to put in a time delay to avoid Jill seeming too good. In practice, academics are rather slow at responding to student queries, so they had to replicate bad performance. Interesting that, in comparing automated with human performance, it wasn't a matter of living up to expectations but of dumbing down to the human level.

These were questions about coding, timetables, file formats and data usage – the sort of questions that have definite answers. Note that she has not replaced the whole teaching task, only made teaching and learning more efficient, scalable and cheaper. This is likely to be the primary use of chatbots in the short to medium term – tutor and learner support. That’s admirable.

Student reactions

The students admitted they couldn’t tell, even in classes run after Jill Watson’s cover was blown – it’s that good. What’s more, they like it, because they know it delivers better information, often better expressed and (importantly) faster than human tutors. Despite the name, and an undiscovered run of three months, the original class never twigged. Turing test passed.

In Berlin this month, I chaired Tarek Richard Besold, of the University of Bremen, who gave a fascinating talk through some of the actual dialogue between the students and Jill Watson.
It was illuminating. The real tutors, who often find themselves frustrated by student queries, sometimes got slightly annoyed and tetchy, as opposed to Jill, who comes in with personal bu[...]
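The categorise-and-retrieve pattern described above can be sketched crudely with TF-IDF similarity. This is purely illustrative, with invented course FAQs; the real Jill Watson was built on IBM's Watson tooling with far more engineering:

```python
# Toy FAQ matcher: answer a new question by finding the most similar
# previously answered one, escalating to a human below a threshold
# (the role the moderated 'mirror forum' played in the real system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_questions = [
    "When is assignment one due?",
    "What file format should I submit?",
    "Can I use Java instead of Python?",
]
answers = [
    "Assignment one is due Sunday at midnight.",
    "Submit a single zipped PDF.",
    "Yes, Java is fine for all projects.",
]

vectoriser = TfidfVectorizer().fit(past_questions)
past_vecs = vectoriser.transform(past_questions)

def answer(query, threshold=0.3):
    sims = cosine_similarity(vectoriser.transform([query]), past_vecs)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "Escalate to a human tutor."
    return answers[best]

print(answer("what format do I submit my files in"))
```

The 97% figure in the post came from adding context and structure on top of raw matching, which is exactly where a toy like this breaks down and real engineering begins.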



Emotional Intelligence - another fraudulent fad

2016-12-24T12:30:15.404+00:00

The L&D hammer is always in search of nails to slam into the heads of employees. So imagine their joy, in 1995, when ‘Emotional Intelligence’ hit HR on the back of Goleman’s book ‘Emotional Intelligence’. (The term actually goes back to a 1964 paper by Michael Beldoch and has more than a passing reference to Gardner’s Multiple Intelligences.) Suddenly, a new set of skills could be used to deliver another batch of ill-conceived courses, built on the back of no research whatsoever. But who needs research when you have a snappy course title?

EI and performance

At last we have some good research on the subject, which shows that the basic concept is flawed and that having EI is less of an advantage than you think. Joseph et al. (2015) published a meta-analysis of 15 carefully selected studies, easily the best summary of the evidence so far. What they found was a weak correlation (0.29) with job performance. Note that 0.4 is often taken as a reasonable benchmark for evidence of a strong correlation. To put this into plain English, it means that EI has a predictive power on performance of only 8.4% (the variance explained, which is the square of the correlation: 0.29² ≈ 0.084). Put another way, if you’re spending a lot of training effort and dollars on this, it’s largely wasted. The clever thing about the Joseph paper was their careful focus on actual job performance, as opposed to academic tests and assessments. They cut out the crap, giving it real evidential punch.

EI is a bait and switch

What became obvious as they looked at the training and tools is that there was a bait and switch going on. EI was not a thing-in-itself but an amalgam of other things, especially good old personality measures. When they unpacked six EI tests, they found that many of the measures were actually personality measures, such as conscientiousness, industriousness and self-control, taken from other personality tests. So they did a clever thing and ran the analysis again, this time controlling for established personality measures. This is where things got really interesting. The correlation between EI and job performance dropped to a shocking -0.2.

Weasel word ‘emotional’

Like many fads in HR, an intuitive error lies at the heart of the fad. It just seems intuitively true that people with emotional sensibility should be better performers, but a moment’s thought and you realise that many forms of performance may rely on many other cognitive traits and competences. In our therapeutic age, it is all too easy to attribute positive qualities to the word ‘emotional’ without really examining what that means in practice. HR is a people profession, people who care. But when they bring their biases to bear on performance, as with many other fads, such as learning styles, Maslow, Myers-Briggs, NLP and mindfulness, emotion tends to trump reason. When it is examined in detail, EI, like these other fads, falls apart.

Weasel word ‘intelligence’

I have written extensively about the danger in using the word ‘intelligence’, for example in artificial intelligence. The danger with ‘emotional intelligence’ is that a dodgy adjective pushes forward an even dodgier noun. Give emotion the status of ‘intelligence’ and you give it a false sense of its own importance. Is it a fixed trait, stable over time? Can it be taught and learned? Eysenck, the doyen of intelligence theorists, dismissed Goleman’s definition of ‘intelligence’ and thought his claims were unsubstantiated. In truth, the use of the word is misleading.

Bogus tests

Worse still, EI has some tests that are shockingly awful.
Tests often lie at the heart of these fads, as they can be sold, practitioners trained and the whole thing turned into a pyramid-selling Ponzi scheme. Pra[...]



Top 20 myths in education and training

2017-04-08T11:20:18.768+00:00

Let's keep this simple… click on each title to get full critique…

1. Leadership – We’re all leaders now, rendering the word meaningless.
2. Emotional Intelligence – Dodgy adjective pushing an even dodgier noun.
3. Learning styles – An intuition gone bad.
4. Myers-Briggs – Shoddy and unprofessional judgements on others.
5. Learning objectives – Bore learners senseless at the start of every course.
6. Microlearning – Another unnecessary concept.
7. Maslow's hierarchy – Coloured slide looks good in Powerpoint.
8. Diversity – Inconvenient truths (research) show it's wrong-headed.
9. NLP – No Longer Plausible: training’s shameful, fraudulent cult?
10. Mindfulness – Yet another mindless fad.
11. Social constructivism – Inefficient, inhibiting and harmful fiction.
12. 21st C skills – They're so last century!
13. Values – Don't be a dick! The rest is hubris.
14. Piaget – Got nothing right and poor scientist.
15. Kirkpatrick – Old theory, no longer relevant.
16. Mitra's Hole-in-the-wall – Not what it seems!
17. Mitra's SOLE – 10 reasons why it is wrong.
18. Ken Robinson – Creative with the truth.
19. Gamification – Playing Pavlov with learners?
20. Data Analytics – Do bears shit in the woods?[...]



Why AI needs to drop the word ‘intelligence’

2016-12-16T17:53:20.833+00:00

The Turing test arrived in the brilliant Computing Machinery and Intelligence (1950), along with its nine defences – still an astounding paper that sets the bar on whether machines can think and be intelligent. But it’s unfortunate that the title includes the word ‘intelligence’, as it is never mentioned in this sense in the paper itself. It is also unfortunate that the phrase AI (Artificial Intelligence), invented by John McCarthy in 1956 (the year of my birth) at Dartmouth (where I studied), has become a misleading distraction.

Binet, who was responsible for inventing the IQ (intelligence quotient) test, warned against it being seen as a sound measure of individual intelligence or as something ‘fixed’. His warnings were not heeded, as education itself became fixated with the search for, and definition of, a single measure of intelligence – IQ. The main protagonist was Eysenck, and it led to fraudulent policies, such as the 11+ in the UK, promoted on the back of fraudulent research by Cyril Burt. Stephen Jay Gould’s 1981 book The Mismeasure of Man is only one of many that have criticised IQ research as narrow, subject to reification (turning abstract concepts into concrete realities) and linear ranking, when cognition is, in fact, a complex phenomenon. IQ research has also been criticised for repeatedly confusing correlation with cause, not only in heritability, where it is difficult to untangle nature from nurture, but also when comparing scores in tests with future achievement. Class, culture and gender may also play a role, and the tests are not adjusted for these variables. The focus on IQ, a search for a single, unitary measure of the mind, is now seen by many as narrow and misleading. Most modern theories of mind have moved on to more sophisticated views of the mind as having different but interrelated cognitive abilities. Gardner tried to widen its definition into Multiple Intelligences (1983), but this is weak science and lacks any real rigour. It still suffers from a form of academic essentialism. More importantly, it distorts the field of what is known as Artificial Intelligence.

Drop the word ‘intelligence’

We would do well to abandon the word ‘intelligence’, as it carries with it so much bad theory and practice. Indeed, AI has, in my view, already transcended the term, as it has gained competences across a much wider set of competences (previously intelligences), such as perception, translation, search, natural language processing, speech, sentiment analysis, memory, retrieval and many other domains.

Machine learning

Turing interestingly anticipated machine learning in AI, seeing the computer as something that could be taught like a child. This complicates the use of the word ‘intelligence’ further, as machines in this sense operate dynamically in their environments, growing and gaining in competence. Machine learning has led to successes in all sorts of domains beyond the traditional field of IQ and human ‘intelligences’. In many ways it is showing us the way, going back to a wider set of competences that includes both ‘knowing that’ (cognitive) and ‘knowing how’ (robotics). This was seen by Turing as a real possibility, and it frees us from the fixed notion of intelligence that got so locked down into human genetics and capabilities.

Human-all-too-human

Other formulations of capabilities may be found if we focus not on the anthropomorphic view of intelligence and learning, but on competences. The word ‘intelligence’ has too much human import and baggage.
It makes man the measure of all things, whereas it is clear that computer power has alr[...]
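As an aside, Turing’s ‘child machine’ intuition is easy to make concrete. The toy sketch below is my illustration, not anything in Turing’s paper: the simplest possible learner, a nearest-neighbour classifier, whose competence grows purely from being shown more labelled examples. All the data and labels are hypothetical.

```python
# Toy sketch of the 'child machine' intuition (my illustration, not Turing's):
# no rules are programmed in; competence comes entirely from labelled examples.

from math import dist  # Euclidean distance (Python 3.8+)

def predict(examples, query):
    """Label a query point with the label of its nearest seen example."""
    _, label = min(examples, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical experience: (features, label) pairs the learner has been shown.
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.3), "dog"),
]

print(predict(examples, (1.1, 0.9)))  # -> cat
print(predict(examples, (5.1, 4.9)))  # -> dog

# Show it something new and its competence widens - no code changes needed.
examples.append(((9.0, 0.5), "fox"))
print(predict(examples, (8.8, 0.4)))  # -> fox
```

The point is not the algorithm, which is trivial, but that ‘teaching’ here means adding examples, not rewriting rules – competence acquired, not specified.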



Brexit & Higher Ed – an opportunity – don’t panic, life goes on

2017-01-07T12:50:35.731+00:00

Despite the excellent Chair’s instruction, at Online Educa in Berlin, NOT to replay the debate and express personal opinions, we had a caricatured and patronising view of 17.4 million Brexit voters from the first two speakers, who ignored his appeal. They both reran the rhetoric. The former Head of HE for the British Council laid into Brexiters: “All the correct people said it was dangerous… obvious lies”. Then a French woman, who works for a French Ministry and had a thin grasp of why Brexit happened, literally showed us images of the Brexit bus – “They all voted on what they saw on the side of a bus” – followed by a patronising cartoon accusing Brexit voters of being idiots. On and on the patronising slides went – basically ‘Woe is me – I’m smart, 17.4 million are stupid’ – not realising that the stupidity lay in their own naïve views. Thankfully, Professor Paul Bacsich, who knows a thing or two about such things, came to the rescue as the last speaker, with some acute observations on Brexit and Higher Education. He was neither pessimistic nor optimistic – just realistic. This made a welcome change from the mixture of arrogance and self-pity that had hung over the debate until that point.

Research

Unlike other sectors, HE has already had several Brexit-related concessions from the British Government. One is on Higher Education research, where the Government has agreed to underwrite Horizon funding. In early August the Chancellor, Philip Hammond, promised the universities participating in Horizon 2020 that the Treasury will underwrite the payments, even when specific projects continue beyond the UK’s departure from the EU. This was, in my view, quite generous, as the overvaluation of forced collaboration was often a recipe for second- and third-rate research. In practice, I feel that research comes from all sorts of sources, and direct funding by the UK Government may result in more focused, higher quality research than the fixed EU structures offer.

Student support

There was a second promise in October, that European Union students applying for university places in the 2017 to 2018 academic year will still have access to student funding support. The UK government has also just announced that the country’s research councils will continue to fund postgraduate students from the European Union whose courses start in the next academic year. There is a utopian view that there is some sort of equitable arrangement across Europe for students moving from one country to another. In practice, it is an unholy mess. With Brexit, it will simply be slightly less messy. The Scots, as Paul said, are likely to go it alone, with some supermarket offer to EU students, which is all about market share.

Fees

English fees are the highest in Europe, and universities basically charge what they want, within a cap. This has nothing to do with the EU, as education is a devolved issue in both the UK and the EU. The fundamental problem is rising fees and costs, which universities are driving up, EU or no EU. As Paul says, the real issue here is rising costs, which need to be addressed. The justification of very high fees for international students is not at all clear, in terms of the pedagogy and services they get. Brexit may be the jolt to the system we need to address this problem. There is also the serious issue of EU students having racked up record loan debts of £1.3 billion, a 36 per cent increase on a year earlier.
About 11 per cent, or 8,600, of former students from the EU have failed to repay their loans after graduation. There is no effective polic[...]



Obama’s last act – recommendations on AI in learning

2016-11-29T11:16:27.494+00:00

Two readable government reports have been published, one from the US, the other from the UK. In one of Obama’s last acts, in October the White House published an excellent overview of AI, ‘Preparing for the Future of Artificial Intelligence’, pointing to the many ways AI can, will and should benefit the US economy. What really caught my eye were the case study and two recommendations around learning.

Expert to novice

The US Navy used an AI tutoring system to capture ‘expertise’ that traditionally took 7-10 years to acquire, then trained AI software to act as an intelligent, one-to-one tutor in a 16-week intensive course. The results were impressive, as the AI-tutored students “frequently outperform Navy experts with 7-10 years of experience in both written tests of knowledge and real-world problem solving…. by a wide margin”. This, and other findings, led the report to make two recommendations around the use of AI in learning.

Recommendation 3

“The Federal Government should explore ways to improve the capacity of key agencies to apply AI to their missions… to support R&D to determine whether AI and other technologies could significantly improve student learning outcomes.”

Recommendation 4

“…develop a community of practice for AI practitioners across government. Agencies should work together to develop and share standards and best practices around the use of AI in government operations. Agencies should ensure that Federal employee training programs include relevant AI opportunities.”

UK Government report

In November, the UK Government published another excellent and readable report, with the title ‘Artificial intelligence: opportunities and implications for the future of decision making’. It is as good an introduction to the main concepts of machine learning (supervised and unsupervised) and deep learning as I’ve read, although it falls short of the concrete recommendations I found in the US report.

Conclusion

It’s good to see governments wake up to what is actually happening in AI. This is not a future tense issue, it is past tense. AI is here in Google, social media, Netflix, online dating, finance, fraud detection, spam detection, sports, healthcare and now education. It is easily the most important new trend in IT, as every major tech company in the world is investing in AI and making acquisitions. My own view is that it will also disrupt education and training, with products such as WildFire, which produces online learning at the click of a button, and CogBooks, with its adaptive learning platform. Let’s not be left behind, as we so often are when it comes to using smart technology to do smart things.[...]



What's best in online learning design - 1st or 3rd person?

2016-11-22T22:05:45.335+00:00

Richard Mayer has over 500 publications to his name and does something extraordinary: he tells us how to design online learning. What makes his work so brilliant, and useful, is that some of his findings are counterintuitive. Take this brilliant example.

First or third person?

When showing procedures in a series of graphics, photographs, animations or video, should you show them from the first (learner’s) or third (teacher’s) person perspective? You should show them from the first person perspective, to achieve higher retention and transfer. This is not surprising when you think about what you are trying to achieve – cognitive change in the learner’s brain. First person is exactly how the actions will be performed in real life, so that viewpoint is more congruent with the eventual outcome.

Wrong practice

Yet most photographers, animators, graphic artists and video directors are likely to create or shoot from a third person perspective. Third person does have the additional advantage, in some tasks, of a more open, less occluded view, as the hands and fingers are not covering the action. First person may be trickier to shoot, as the instructor lies between the camera and the action, but it is right.

VR is first person

This finding also lends weight to the use of first-person VR and AR in learning, where the learner is the viewer/director. VR gives the added benefits of total immersion, full attention, emotional impact, context and actual doing, which all add up to increased retention and transfer.

Conclusion

Online design needs to pay more attention to findings such as this. Mayer has been publishing this stuff for decades, yet many are unaware of his work, which shows time and time again that ‘less is more’. Yet online learning design seems to have drifted towards ‘more is more’. For hundreds of other tips on online learning design from Mayer and other researched sources, click here.[...]



Weapons of 'Math' Destruction - sexed up dossier on AI?

2016-11-21T17:51:32.988+00:00

Unfortunate title, as O’Neil’s supposed WMDs are as bad as Saddam Hussein’s mythical WMDs – the evidence similarly weak, sexed up and cherry-picked. This is the go-to book for those who want to stick it to AI by reading a pot-boiler. But rather than taking an honest look at the subject, O’Neil takes the ‘Weapons of Math Destruction’ line far too literally, and unwittingly re-uses a term that has come to mean exaggeration and untruths. The book has some good arguments and passages, but the search for truth is lost as she tries too hard to be contrarian.

Bad examples

The first example borders on the bizarre. It concerns a teacher who is supposedly sacked because an algorithm said she should be sacked. Yet the true cause, as revealed by O’Neil herself, was other teachers who had cheated on behalf of their students in tests. Interestingly, they were caught through statistical checking, as too many erasures were found on the test sheets. That’s more man than machine.

The second is even worse. Nobody really thinks that US College Rankings are algorithmic in any serious sense. The ranking models are quite simply statistically wrong. The problem is not the existence of fictional WMDs but poor schoolboy errors in the basic maths. It’s a straw man, as the rankings use subjective surveys and proxies, and everybody knows they are gamed. Malcolm Gladwell did a much better job in exposing them as self-fulfilling exercises in marketing. In fact, most of the problems uncovered in the book, if one does a deeper analysis, are human. The main problem is that these case studies are very complex.

Take PredPol, the predictive policing software. Sure, it has its glitches, but the advantages vastly outweigh the disadvantages, and the system and its use evolve over time to eliminate the problems. I could go on, but the main problem with the book is this one-sidedness. Most technology has a downside. We drive cars, despite the fact that well over a million people die gruesome and painful deaths every year in car accidents. Rather than tease out the complexity, even comparing upsides with downsides, we are given over-simplifications. The proposition that all algorithms are biased is as foolish as the proposition that all algorithms are free from bias. This is a complex area that needs careful thought, and the real truth lies, as usual, somewhere in between. Technology often has this cost-benefit feature. To focus on just one side is quite simply a mathematical distortion, which is what O’Neil does in many of her cases.

The chapter headings are also a dead giveaway – Bomb Parts, Shell Shocked, Arms Race, Civilian Casualties, Ineligible to Serve, Sweating Bullets, Collateral Damage, No Safe Zone, The Targeted Civilian and Propaganda Machine. This is not 9/11, and the language of WMDs is ridiculously hyperbolic – verging on propaganda itself.

At times O’Neil makes good points on ‘data’ – small data sets, subjective survey data and proxies – but this is nothing new and features in any 101 stats course. The mistake is to pin the bad data problem on algorithms and AI – that’s a misattribution. Time and time again we get straw men in online advertising, personality tests, credit scoring, recruitment, insurance and social media. Sure, problems exist, but posing marginal errors as a global threat is a tactic that may sell books but is hardly objective. In this sense, O’Neil plays the very game she professes to despise – bias and exaggeration.

The final chapter is where it all goes a bit weird, with the laughable Hippocratic Oath. Here’s the first line in her imagin[...]
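The erasure checking mentioned above is worth a small aside, because it is ordinary statistics rather than AI. A toy sketch of that kind of check follows: flag classrooms whose average wrong-to-right erasure counts are outliers. The data, names and cut-off are all hypothetical; real test-security audits use far more robust methods.

```python
# Toy sketch of an erasure-analysis check (hypothetical data and threshold).
# Flag classrooms whose average wrong-to-right erasures are statistical outliers.

from statistics import mean, stdev

# Hypothetical average erasures per answer sheet, by classroom.
erasures = {
    "Room 101": 1.2, "Room 102": 0.9, "Room 103": 1.4,
    "Room 104": 1.1, "Room 105": 7.8, "Room 106": 1.0,
    "Room 107": 1.3,
}

mu = mean(erasures.values())
sigma = stdev(erasures.values())

for room, count in erasures.items():
    z = (count - mu) / sigma  # standard score relative to all classrooms
    if z > 2:  # lenient cut-off, for illustration only
        print(f"{room}: {count} average erasures (z = {z:.2f}) - investigate")
```

Which rather proves the anecdote’s point: this is classic statistical auditing – man, not machine.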



Lo and behold - Herzog's meditation on the internet

2016-11-14T12:24:32.925+00:00

‘Lo and Behold’, by the always strange Werner Herzog, is a meditation, nay a ten-chapter chronicle, on the internet – and it’s surprisingly good. I say that because he’s fundamentally a romantic filmmaker. As in Cave of Forgotten Dreams, he can drift into spiritual nonsense, and he does, in the middle of this film, feature some predictable Herzogian nutters. But on the whole he lets the people he talks to do all the work, and they’re mostly smart and reflective – Kleinrock, Kahn, Ted Nelson, Zittrain, Mitnick, Rajkumar, Thrun, Musk. Some big minds here, with some great insights. Kleinrock and Kahn were there at the start, with Ted Nelson (who thinks we got the web all wrong). The rest tease out some fascinating thoughts on what they regard as perhaps the greatest and most significant invention of our species.

It’s a journey from the first meaningful message on the internet (Lo...), through its evolution, enormity and emergent qualities. But its major theme is AI. Most science fiction missed the internet (it was all flying cars), and the internet, with networked intelligence, is posing some very profound questions about what it is to be human, what it is to be smart but not human, and what the future may bring. These problems vex me, as I’m spending a lot of time talking, writing and working in this area – believe me, it’s both exhilarating and terrifying.

Thrun talks about MOOCs and the fact that his Stanford students didn’t make the top 400 of those who finished his online course. He sees education as one of the great gains from the internet. This was the man behind Google’s self-driving cars, which are linked to the internet and learn collectively, always getting better, passing this on to all unborn cars. See what I mean – this is smart thinking. There are robot soccer teams that work together as a team and learn to improve their tactics; their creator thinks robots will eventually beat humans in a soccer World Cup Final. There is the chimp robot that imagines and models its actions before performing them, giving a hint at consciousness. There are the emergent qualities of the internet – and its unpredictability. He also dips down into the dark side, with hacking and the horrific possibility of the internet being wiped out by a solar flare.

This movie is well worth a watch. Sebastian Thrun, who has an EdTech company worth a billion, says something fascinating towards the end: “Almost everything we do will be done by machines… almost everything we do, will be done better by machines… because machines can learn faster than people can learn.” That is an astonishing statement.[...]



20 ways AI will affect managers – why are HR & L&D so out of touch?

2016-11-08T00:01:24.884+00:00

Around the world, especially in the richer nations, the truth is dawning that we need fewer managers. The middle class is now waking up to the fact that they are the new working class, as their roles diminish. Technology destroyed employment in the fields and people fled to the factories. Technology destroyed employment in the factories and people fled to offices and management. Technology, especially AI, will destroy offices and management, but there’s nowhere else to go. That is the great threat of our age.

But HR and L&D are eerily silent. How many Leadership courses are being delivered today without any mention of AI, despite the fact that we know its impact has been, and will continue to be, huge? How many HR folk have the remotest idea of what is happening here? To sit back and watch this happen, without serious debate and preparation, is bizarre. A more immediate debate is how to cope with AI in the short to medium term. This report from the Harvard Business Review, ‘How Artificial Intelligence Will Redefine Management’, is a good start.

Management & AI

The survey was substantial, covering 1,770 managers in 14 countries, with interviews of 37 executives in charge of digital transformation at their organizations. Findings were reduced to five recommendations:

Practice 1: Leave Administration to AI
Practice 2: Focus on Judgment Work
Practice 3: Treat Intelligent Machines as “Colleagues”
Practice 4: Work Like a Designer
Practice 5: Develop Social Skills and Networks

But surveys tend not to spot the really disruptive stuff, and as the respondents are the very people likely to be affected by that disruption, they tend to be conservative. So first, a quick expansion of these five propositions, then I’ll add the fifteen more I think they missed.

Practice 1: Leave Administration to AI

A big one. The survey showed that managers spend well over half their time on administration. This is precisely the sort of work that has already been automated to a degree by tools, and will increasingly be done by AI-driven systems. Monitoring, scheduling and reporting are all likely to be further automated through AI. The room for efficiencies here is enormous, as organisations are essentially paying top dollar for relatively routine and banal work.

Practice 2: Focus on Judgment Work

Those surveyed seemed to think that this was an area that would remain relatively untouched by AI. I’m not so sure. The sort of expertise that leads to good judgment may also be under threat, as AI does the data gathering, analysis and production of insights. Recommendations (judgments) may well be AI driven. Susskind’s The Future of the Professions is packed with examples of judgment-making AI across a range of professions. There will be a pendulum swing as AI creeps forward, doing more and more, eating into the judgment sphere.

Practice 3: Treat Intelligent Machines as “Colleagues”

The report also shies away from the obvious conclusion that we will need fewer managers. They found that 78% of the surveyed managers believe that they will trust the advice of intelligent systems in making business decisions in the future. This raises the obvious question of how many managers we need simply to confirm these decisions. The anthropomorphic idea of ‘colleagues’ is also a bit odd. I don’t see Excel as a colleague. I see it as a tool. I suspect the idea that AI always manifests itself as a robot is at work here. It’s not so much managers treating AI as coll[...]



AI tutors have real impact in skills training

2016-11-01T23:45:16.660+00:00

In the just published, and excellent, ‘Preparing for the Future of Artificial Intelligence’ by the White House, AI in education gets a good airing. DARPA has an “Education Dominance” program that focuses on accelerating training from years to months. By modelling the shift from novice to expert and allowing AI to use those models, they have had success in one domain: IT administration. The AI tutor learns from data provided by real experts and delivers one-to-one expertise in an intensive 16-week course. We are now in a position, I feel, especially with content creation, to capture expertise and deliver it at scale. This is something I’ve been doing with WildFire in large corporates.

The AI-tutored learners “frequently outperform Navy experts with 7-10 years of experience in both written tests of knowledge and real-world problem solving” – and the actual evaluation added by “a wide margin”. Seems good to me. The important point is that this general approach can be used for many other skills. Why we are not taking this approach with the massive push on apprenticeships in the UK is beyond me. My one effort at getting AI into a major skills-based organisation was met with stubborn old-school resistance. We need radical approaches like this if we are to train people quickly for jobs that emerge in the new economy. It is no longer acceptable to regard ‘years’ as the unit of currency in education and training.

Education a slow learner

This is bold, and as the separate Department of Education report referenced in the White House report says, “Performance improvement in education has lagged behind other sectors…. because of limited R&D investment, the benefits of the IT revolution have largely passed education by.” It was good to see that the report had two major recommendations on the use of AI in learning.

Recommendations

Recommendation 3: The Federal Government should explore ways to improve the capacity of key agencies to apply AI to their missions. For example, Federal agencies should explore the potential to create DARPA-like organizations to support high-risk, high-reward AI research and its application, much as the Department of Education has done through its proposal to create an “ARPA-ED,” to support R&D to determine whether AI and other technologies could significantly improve student learning outcomes.

Recommendation 4: The NSTC MLAI subcommittee should develop a community of practice for AI practitioners across government. Agencies should work together to develop and share standards and best practices around the use of AI in government operations. Agencies should ensure that Federal employee training programs include relevant AI opportunities.

AI is not the only technology that should be considered for skills training. VR is another. With its absolute attention, emotional impact, ability to learn by doing, context and transfer, it’s a natural candidate for practical skills, as is AR.

Conclusion

As the AI revolution bites, it will have (and already has had) a massive effect on employment. The need to use AI in skills training is a must. If we don’t start now, we will have missed a huge opportunity. Let’s use AI for human good, not be a victim of AI’s success.[...]



AI will decimate jobs – all the more reason to use it in education

2016-11-03T14:06:36.184+00:00

As the evidence builds, AI is seen as a serious threat to millions of jobs. My own view is that this has already happened. If we look at the top companies by market cap in 2006, then in 2016, we see that the tech giants now predominate. But they employ far fewer people and, at the same time, destroy jobs elsewhere. If they don’t destroy jobs, they almost certainly create underemployment. Some argue that the hollowing out of the middle class has already taken place, via automation, hence the current political disillusionment and Trump.

Basic Minimum Wage

If, as most agree, the net result is far fewer jobs, or at least less time at work, we need to turn our attention to the political consequences. This is primarily a moral and political issue, and the Basic Minimum Wage has been touted as the obvious answer. But I’m not convinced. The desire to get on in life and be productive seems too powerful a cultural force to give way to a supported life of leisure through taxation and public expenditure, even if that were possible. A more likely scenario is one where our needs for growth and development are also satisfied.

Basic Education Rights

What is more likely is the need to educate and train people to do many different things in this new AI-infused, smart economy. This is hardly likely if the costs remain at their present levels, and rising. This is not about the ever-expanding campus model. If we keep to the idea of education as being on campus, in institutions, always delivered by expensive ‘teachers’, then there will be an impasse. This quite simply cannot happen.

AI creates as it destroys

Shiva destroys as he creates; so it could be with AI. AI also needs to be applied to education, to bring the same productivity efficiencies that it brings elsewhere. There is nothing unique about teaching and learning that makes them immune from AI. Teaching is a means to an end – learning. If that can be done through AI, then we should grasp the opportunity. If we are facing a future where many people do not have a job, have more leisure time or need to be trained in newer skills, AI may then be a partial cure for the very ill it creates. If applied to education, teaching and learning, it may right the lifeboat before it gets a chance to overturn.

Forward to the past

It seems likely that we face a return to a human condition that last existed before the industrial revolution, when machines automated and stimulated the global economy, and before the agrarian revolution, when machines destroyed work on the land. It is a return to an age when people did not work in factories or fields but prepared the land, sowed seeds, waited and harvested. Work was far more episodic in those times, with long periods not exactly of leisure (a long winter with meagre food should not be romanticised), but there was certainly more time.

Scalable education through AI

In our technological age, when the communications technology we have both entertains and educates, it may also provide opportunities, at scale, to educate us for this new future. It could, just as it has in robotics, reduce the costs of education so much that we can all, including those in the developing world, benefit from its scalable dividend. It is right to worry about the future, as recent events portend great dangers. Trump has shown that people are angry at losing the gainful employment and lives they once had. In supposedly prosperous Europe, the southern countries have massive adult[...]



10 reasons why leaderboards suck in learning

2016-11-01T23:48:18.204+00:00

Gamification has been around for years (over 30) but I fear it has recently descended into a superficial, Pavlovian box of tricks, overhyped by vendors keen to prove how ‘cool’ and ‘contemporary’ they are. In practice, much of it is a bit hokey, insensitive to learning theory and, I fear, to the whole point of the bigger game – learning.

Don’t get me wrong, I like gamification, in the sense of game techniques grounded in good learning theory. These include: levels of attainment, catastrophic failure and repeats, effortful learning, mini-sims and deliberate practice. What I witness, however, is the superficial buzz of the arcade: leaderboards, points, board games, sound effects, noise and animated piffle. Gamification only works when the cognitive load that gamification always entails results in greater learning outcomes. If you go for approaches that are superficially buzzy yet work against learning, in that they draw the learner away from actual learning, realism, context and transfer, you will have failed. You may even have demotivated, rather than motivated, most of your learners. This is especially true of the one gamification technique I do not like, and do not recommend – the leaderboard. The word betrays itself, embedding as it does the word ‘Leader’, but there are many other problems:

1. Decoupled from standards

Leaderboards are inappropriate for personal development. Learning is not a race, not the survival of the fittest. Any league table that ranks people into a linear list decouples the learning from real measures of competence. You can list all you want, but that list is not anchored in competences just because it is ranked. Let’s say you have 100 learners in five leagues of 20. The entire 100 may be failing, but you can still sort them into a table (see the sketch at the end of this post).

2. Leaders problem

One of the problems with leaderboards is something I experienced on a project for a major bank some years ago, where we set up product quizzes on kiosks (like fruit machines) around the country. The leaderboard was quickly dominated by a few fanatics, one site in particular, who would do anything, including cheat, to stay at the top. Leaderboards encourage people to cheat. These ‘leaders’ became a problem and we had to take it down. This is something I’ve seen elsewhere.

3. Zero sum game

In general, social challenges, where individual learners are exposed to group comparisons and scores, can demotivate those who lag behind. The downside is obvious: seeing people move ahead of you makes you think you’re not good enough. For every person that moves up a place there’s someone who moves down, so the motivational spur may, in total, be zero. For every leader there’s a laggard. It may be worse than this, as even those above the median point may feel disappointed. In fact, the leaderboard technique tends to ignore the majority in favour of the few at the top.

4. Self determination theory

Far better to let learners use themselves as the benchmark, set self-targets and self-goals, and encourage progress in that way. This avoids the danger of disaffecting entire groups who do not see their learning in terms of games. Gamification is often best when it is almost invisible and embedded in the learning strategy. This ‘self determination’ theory is well researched and relies more on encouraging intrinsic, as opposed to extrinsic, motivation.

5. Short-term [...]
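As promised in point 1, here is a toy illustration of how ranking decouples from competence: every learner sits below the pass mark, yet the league table still looks authoritative. A minimal sketch only; the scores, pass mark and league size are all hypothetical.

```python
# Toy illustration: a leaderboard ranks cleanly even when every learner fails.
# Scores, pass mark and league size are hypothetical.

import random

random.seed(42)  # reproducible fake data
PASS_MARK = 70
LEAGUE_SIZE = 20

# 100 hypothetical learners, every one scoring well below the pass mark.
scores = {f"learner_{i:03d}": random.randint(10, 45) for i in range(100)}

# Sort into a classic leaderboard, highest score first.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Five tidy 'leagues' of 20 - the table looks meaningful...
for league, start in enumerate(range(0, len(ranked), LEAGUE_SIZE), start=1):
    name, score = ranked[start]
    print(f"League {league} leader: {name} with {score}")

# ...yet nobody is anywhere near competence.
passed = sum(score >= PASS_MARK for score in scores.values())
print(f"Learners at or above the pass mark: {passed}")  # -> 0
```

Five tidy leagues, five proud ‘leaders’, and not a single competent learner – which is the whole point.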