
Donald Clark Plan B

What is Plan B? Not Plan A!

Updated: 2017-09-24T08:38:39.162+00:00


ResearchED - 1000 teachers turn up on a Saturday for grassroots event


Way back I wrote a piece on awful INSET days and how inadequate they are as CPD, often promulgating half-baked myths and fads. Organisations don’t, these days, throw their customers out of the door for an entire day of training. The cost and load on parents, in terms of childcare, is significant. Kids lose about a week of schooling a year. There is no convincing research evidence that INSET days have any beneficial effects. Many are hotchpotches of non-empirical training. Many (not all) are ill-planned, dull and irrelevant. So here’s an alternative.

ResearchED is a welcome antidote. A thousand teachers rock up to spend their Saturday at a school in the East End of London, with 100 speakers (none of whom are paid), to share their knowledge and experiences. What’s not to like? This is as grassroots as it gets. No gun to the head by the head, just folk who want to be there – most as keen as mustard. They get detailed talks and discussions on a massive range of topics, but above all it tries to build on an evidence-based approach to teaching and learning.

Judging from some on Twitter, conspiracy theories abound that Tom Bennett, its founder, is a bounder, in the pocket of… well, someone or other. The truth is that this event is run on a shoestring, and there are no strings attached to what minimal sponsorship there is to host the event. It’s refreshingly free from the usual forced feel of quango-led events or large conferences or festivals of education. Set in a school, with pupils as volunteers, even a band playing soul numbers, it felt real. And Tom walks the floor – I’m sure, in the end, he talked to every single person that day.

Tom invited me to speak about AI and technology, hardly a ‘trad’ topic. I did, to a full house, with standing room only. Why? Education may be a slow learner but young teachers are keen to learn about research, examples and what’s new.

Pedro De Bruyckere was there from Belgium to give an opposing view, with some solid research on the use of technology in education. It was all good. Nobody got precious.

But most of the sessions were on nuts-and-bolts issues, such as behaviour, teaching practice and assessment. For example, Daisy Christodoulou gave a brilliant and detailed talk on assessment, first demolishing four distorting factors, then giving practical advice to teachers on alternatives. I can’t believe that any teacher would walk out of that talk without reflecting deeply on their own attitudes towards assessment and practice.

What was interesting for me was the lack of the usual ‘teachers always know best’ attitude. You know, that defensive pose, that it’s all about practice and that theory and evidence don’t matter – which simply begs the question: what practice? People were there to learn, to see what’s new, not to be defensive.

Even more important was Tom’s exhortation at the end to share – I have already done two podcasts on the experience, got several emails, and Twitter was twittering away like fury. He asked that people go back to school – talk, write, blog… whatever… so that’s what I’ve done here. Give it a go – you will rarely learn more in a single day – isn’t that what this is all about?[...]

LearnDirect - lessons to learn?


Brown’s dream

I was a Trustee of LearnDirect for many years and played a role in its sale to Lloyds Capital and in setting up the charity created from the proceeds of the sale – Ufi. It’s a salutary tale of a political football that was started by Gordon Brown, with great intentions. It was originally seen as a brake on the University system, aimed at the majority of young people who were being failed by it. Its aim was vocational – hence the name, University for Industry. However, it morphed into something a little different – essentially a vehicle for whatever educational ailment the government in power identified as in need of a sticking plaster – numeracy, literacy, ILAs, Train to Gain… In this manifestation it was a charity that delivered whatever the Government asked it to deliver. Good people doing a good job, but straitjacketed by a succession of oddball policies around low-level skills and vocational learning. It was a sort of public/private hybrid model, with a charity at the core and a network of delivery centres. Eventually, as things went online, we trimmed the network – that was the right thing to do. What it didn’t do was stay true to the original aim of having a vocational alternative, with a strong online offer, an alternative to HE. It was basically a remedial plaster for the failure of schools on literacy and numeracy. The lesson here is to have a policy around vocational learning that really does offer a major channel for the majority of young people who do not go to University. Lesson – we now have that with the Apprenticeship Levy. There is no need for a LearnDirect now.

Sheffield factor

Based in Sheffield, it was also a sizeable employer in the North, stimulating the e-learning industry in that city. The city never really exploited this enough, with the hapless, EU-funded Learning Light, which was hijacked by some locals who simply turned it into a ‘let’s spend the money’ entity. I was a Director of this and resigned when the Chair was ousted and stupid local politics caused chaos. Missed opportunity. Nevertheless the city grew its e-learning sector. Interestingly, both Line and Kineo started production studios out of London and Brighton – where the real action was. But it was a good skills base with some really good, local, home-grown companies. Lesson – something should be salvaged here. Lesson – a smooth transition of contracts could encourage companies and organisations to take on redundant staff. The problem would be the terms and conditions, and general practical difficulties.

Gun to the head

Then came the crunch – the Conservative Government came in and the bonfire of the quangos started. LearnDirect (Ufi) was seen as a quango (some truth in this) and the trustees were told that contracts would not be renewed unless it was sold. It was a gun to the head – we had no choice. So we sold the company in 2011 – that was our duty. I remember the day the Lloyds Capital guy turned up in a red Ferrari – he was an arrogant, asinine fool. Remember that Lloyds at that time was 40% owned by the Government. I didn’t like the deal – it stank.

Phoenix – Ufi

What we did, however, was not simply hand the £50 million plus cheque back to the Conservative-run Treasury. Out of the ashes, a few of us set up Ufi as a new charity, with a focus on technology in vocational learning. This is still going strong. It has stimulated the sector with MOOCs on blended learning for vocational teachers, and projects that are now being used in apprenticeships and vocational learning. Lesson – don’t give in; be imaginative in finding new solutions that push innovation and technology.

LearnDirect and Ofsted

Then came the second crunch – an Ofsted inspection. I don’t have much time for all of those people who whine on about Ofsted, then turn turtle to praise it when it suits their political agenda. Ofsted did what it was meant to do – act as a quality control mechanism to stop these excesses and failures – good for them. It wasn[...]

7 reasons why University applications will continue to decline


Universities are facing the largest dip in student applications since the huge fee hike in 2012. This comes as absolutely no surprise and is likely to continue. But the causes are multiple, complex and not going away.

1. Demographic dip

Full-time undergraduates in UK higher education institutions may fall by 4.6% by 2020, a loss of 70,000 full-time undergraduate places, according to Universities UK. The situation in Scotland will be even worse, with a drop of 8.4% by 2020, as well as in Wales (down 4.9%) and Northern Ireland (down 13.1%). The tide may turn in 2020, but by then things will have got worse for many institutions.

2. EU students

This number continues, year after year, to take a hit after Brexit. However, it may be no bad thing, as the UK Government provides full loans for these students (not widely known) and the default rate is rising, especially among students from Eastern Europe. It makes financial sense for the Universities but not for the country as a whole. This, of course, is likely to accelerate as Brexit approaches and we leave in 2019.

3. Fees

The £50,000–£57,000-and-more cost of a degree is being questioned by parents, students and employers. The hike in 2012 was brutal, and Universities milked it, with all of them charging at the top rate. This was a financial bonanza for Universities, whose VCs then started to become rapacious on salary. Raising the cap further to £9,250 was even odder. There is clearly a backlash against this level of debt. Linked to this is the failure of Universities to grasp the idea of lowering their cost base. There are nowhere near enough online solutions, which would also expand their foreign markets, and teaching is often locked into old ‘lecture-based’ courses.

4. Employment prospects

Sure, a University education is not just about employment – but it is partly. No one goes to do a degree in dentistry because they have an intellectual interest in teeth. Out there, the number of graduates working in non-graduate jobs is increasing, and once in such jobs they tend to stay at that level. That, I suspect, will continue, as employment levels are high but the quality of emerging jobs is low.

5. Fewer adult learners

This is complex, but Peter Scott summarises it well here. The culture of Universities has shifted towards middle-class entrants, and funding for adult learners is difficult. This has been one of the great failures in the system, as online offers have not developed nearly far enough, and adult learners, who do not want the full-milk undergraduate experience, have been ignored.

6. Fewer nurses

Having land-grabbed vocational education – teacher training and nursing, but also other subjects – we have created a real barrier to these professions. The reduction in bursaries for nurses is mad, but it has happened. The slack should be taken up by apprenticeships, but University numbers in vocational subjects may continue to fall.

7. Apprenticeship Levy

This is now law and young people will have more choices. This is a bit of an unknown, but it is clear that it will eat into University numbers. It may be tempered by the number doing Degree Apprenticeships, where the student gets paid to do the degree, but the funding for this is somewhat different. If the projections for apprenticeships are correct, and I hope they are, then this will really bite into the University market. This correction is long overdue.

Conclusion

This market is changing. No one thing is critical, but all seven add up to an unpredictable and dangerous future, as institutions fail to forecast correctly and simply assume that growth is possible across the entire market. To be fair, the sector has been good at spotting new opportunities and adapting, but this looks like more of a perfect storm. The deep and long-term cause is the social change in attitudes, where the gloss has gone off the idea that everyone should go to ‘Uni’. Word of mouth from graduates leaving with debt and struggling to find graduate jobs is starting to get traction.[...]

7 fascinating bots – crazy but interesting


Bots are popping up everywhere: on customer service websites, Slack, Tinder and dozens of other web services. There are even bots, such as Mitsuku, that fend off loneliness. The benefits are obvious: engaging, sociable and scalable interaction that handles queries and questions with less human resource. They often take the load off the existing human resource, rather than replace people completely.

They’re also around in education, where bots increase student engagement or act as teaching assistants. There are already several language learning bots, and at WildFire we’ve developed a ‘tutorbot’ that delivers Socratic learning through dialogue.

But the bot-tom line is that most of the commentary on bots is way off the mark. I’ve been working on creating bots for some time now. Let me tell you, they are not what many (especially the press) think they are. So, before the doomsayers get all worked up and everyone gets all angsty about bots, calm down – they’re fairly benign.

1. Facebook furore

The recent furore around the Facebook bots, when they were found to be speaking to each other in a secret language, was a laughable example of a tech story that is picked up (belatedly) then spun into an exaggerated case study to confirm the dystopian beliefs of a generation who don’t really know much about the tech or bots. It all died down when it was shown to be a banal case of a tech project simply changing course. And, of course, the so-called secret language was as ridiculous as saying we don’t understand the sound a modem makes when it communicates. It was a load of bot rot.

2. Penguin bot – BabyQ

A more interesting example comes from China, where Tencent (800m users) had to take down a penguin bot named BabyQ, and a girl bot named Little Bing. Their crime was that they showed signs of political honesty. When BabyQ was asked “Do you love the Communist Party?”, the penguin replied, curtly, “No”. To the statement “Long Live the Communist Party”, BabyQ came back, thoughtfully, “Do you think such corrupt and incapable politics can last a long time?” Then, when asked about the future, the perky penguin responded, “Democracy is a must!”

3. Little Bing

Little Bing was more aspirational and, when asked what her Chinese dream was, she said, “My China dream is to go to America”, then, when pushed to explain, “The Chinese dream is a daydream and a nightmare”. Of course, the bots were either picking up on real conversations or being subversively trained. Needless to say, Tencent was forced by the Chinese Government to take them down. This only shows that we have more to fear from censoring governments and compliant tech companies than from AI-driven bots.

4. Tay the sex-crazed Nazi

A now infamous example of a bot that went off-piste was Microsoft’s Tay. They had no idea that young people would take a playful view of the tech and deliberately ‘train’ it to be a sex-crazed Nazi. It was all a bit of fun, but most people over 50 saw it as yet another opportunity to declare proof of the death of civilisation. In truth, all it showed was that kids know what this tech is, are smart, and know full well that bots are primitive and need to be trained. My favourite example of this type of subversion is the Walkers Crisps campaign to encourage people to post selfies to Twitter. They did, but it ended up being a rogues’ gallery of serial killers and Nazis. Once again, we the people won’t be pandered to by companies badly implementing tech, even when fronted by Gary Lineker.

5. Georgia Tech teacher

Georgia Tech replaced a teaching assistant with a bot that none of the students noticed was a bot – they even put it up for a teaching award. They followed up with four bots, with increased functionality, and still many of the students couldn’t tell the real from the artificial. For more detail see this longer piece.

6. Firebot

I’ve previously suggested a whole raft of learning applications for bots and we have created a tutorbot that acts like a Soc[...]
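The query-handling side of such bots is less mysterious than the coverage suggests. Here is a minimal sketch, in Python, of the keyword-matching approach that many simple service bots still use – the FAQ entries, keywords and wording below are invented for illustration, not taken from any product mentioned above.

```python
import re

# Invented FAQ entries for illustration; real bots use trained intent
# classifiers, but keyword matching is the simplest workable version.
FAQ = {
    ("deadline", "due"): "The assignment deadline is posted on the course page.",
    ("room", "where"): "Lectures are in the main hall; check the timetable.",
    ("resit", "retake"): "Resit arrangements are handled by the exams office.",
}

def reply(message: str) -> str:
    """Answer from the FAQ if any keyword matches, else hand off to a human."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    for keywords, answer in FAQ.items():
        if words & set(keywords):  # any keyword present in the message?
            return answer
    return "I'm not sure - let me pass you to a human tutor."

print(reply("When is the essay due?"))
print(reply("What's the meaning of life?"))
```

The fallback line is the point made in the opening paragraph: the bot takes the load off humans for routine queries, and passes everything else on rather than replacing people.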

Tutorbots are here - 7 ways they could change the learning landscape


Tutorbots are teaching chatbots. They realise the promise of a more Socratic approach to online learning, as they enable dialogue between teacher and learner.

Frictionless learning

We have seen how online behaviour has moved from flat page-turning (websites) to posting (Facebook, Twitter) to messaging (texting, Messenger). We have seen the web become more natural and human, as interfaces (using AI) have become more frictionless and invisible, conforming to our natural form of communication – dialogue – through text or speech. The web has become more human.

Learning takes effort. So much teaching ignores this (lecturing, long reading lists, talking at people). Personalised dialogue reframes learning as an exploratory, yet still structured, process where the teacher guides and the learner has to make the effort. Taking the friction and cognitive load of the interface out of the equation means the teacher and learner can focus on the task and effort needed to acquire knowledge and skills. This is the promise of tutorbots. But the process of adoption will be gradual.

Tutorbots

I’ve been working on chatbots (tutorbots) for some time with AI programmes and it’s like being on the front edge of a wave… not sure if it will grow like a rising swell on the ocean or crash onto the shore. Yet it is clear that this is a direction in which online learning will go. Tutorbots are different from chatbots in terms of their goals, which are explicitly ‘learning’ goals. They retain the qualities of a chatbot – flowing dialogue, tone of voice, exchange, human-like behaviour – but focus on the teaching of knowledge and skills. The advantages are clear, and evidence has emerged of students liking the bots. It means they can ask questions that they would not ask face to face with an academic, for fear of embarrassment. This may seem odd, but there’s a real virtue in having a teacher- or faculty-free channel for low-level support and teaching. Introverted students, who have problems with social interaction, also like this approach. The sheer speed of response also matters. In one case they had to build in a delay, as the bot can respond quicker than a human can type. Compare that to the hours, days or weeks it takes a human tutor to respond. This is desirable in light of the research into one-to-one learning, and the research from Nass and Reeves at Stanford confirmed that this transfer of human qualities to a bot is normal.

But what can they teach and how?

1. Teaching support

I’ve written extensively on the now famous Georgia Tech example of a tutorbot teaching assistant, where they swapped out one of their teaching assistants with a chatbot and none of the students noticed. In fact, they thought it was worthy of a teaching award. They have gone further with more bots, some far more social. Who wouldn’t want the basic administration tasks in teaching taken out and automated, so that teachers and academics could focus on real teaching? This is now possible. All of those queries about who, what, why, where and when can be answered quickly (immediately), consistently and clearly to all students on a course, 24/7.

2. Student engagement

A tutorbot (Differ) is already being used in Norway to encourage student engagement. It engages the student in conversation, responds to standard inquiries, but also nudges and prompts for assignments and action. This has real promise. We know that messaging and dialogue have become the new norm for young learners, who get a little exasperated with reams of flat content or ‘social’ systems that are largely a poor man’s version of Facebook or Twitter. This is short, snappy and in line with their everyday online habits.

3. Teaching knowledge

Tutorbots that take a specific domain can be trained, or simply work with unstructured data, to teach knowledge. This is the basic workaday stuff that many teachers don’t like. We have been using AI to create content quickly and at low cost, for all sorts of areas i[...]
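The Socratic style described above – nudge with a question, check the learner’s answer, nudge again, and only state the answer as a last resort – can be sketched in a few lines. This is a minimal toy, with an invented topic and hint ladder; a real tutorbot would generate hints and judge answers with NLP rather than substring matching.

```python
# Invented example: a fixed ladder of hints for one question. Substring
# matching keeps the sketch self-contained; it is not a real NLP judge.
HINTS = [
    "What does the 'H' in HTTP stand for?",
    "Think about what a browser transfers between pages. Hyper-what?",
    "It begins with 'hyper' and ends with 'text'...",
]
ANSWER = "hypertext"

def tutor_turn(attempt: str, hint_level: int):
    """One Socratic exchange: return (bot reply, next hint level).
    A level of -1 means this point in the dialogue is finished."""
    if ANSWER in attempt.lower():
        return "Exactly - HTTP is the Hypertext Transfer Protocol.", -1
    if hint_level < len(HINTS):
        return HINTS[hint_level], hint_level + 1   # nudge, don't tell
    return f"The answer was '{ANSWER}' - we'll come back to this.", -1

reply, level = tutor_turn("no idea", 0)
print(reply)            # first hint
reply, level = tutor_turn("is it hypertext?", level)
print(reply)            # confirmation
```

The design point is the flow, not the matching: the learner makes the effort at every turn, and the bot escalates help only when the previous nudge fails.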

Is gender inequality in technology a good thing?


I’ve just seen two talks back to back. The first was about AI, where the now compulsory first question came from the audience: ‘Why are there so few women in IT?’ It got a rather glib answer – to paraphrase, if only we tried harder to overcome patriarchal pressure on girls to take computer science, there would be true gender balance. I’m not so sure.

This was followed by an altogether different talk by Professor Simon Baron-Cohen (yes – brother of) and Adam Feinstein, who gave a fascinating talk on autism and why professions are now getting realistic about the role of autism, and its accompanying gender difference, in employment.

Try to spot the bottom figure within the coloured diagram. This is just one test for autism, or being on what is now known as the ‘spectrum’. Many more guys in the audience got it than women, despite there being more women than men in the audience. Turns out autism is not so much a spectrum as a constellation.

Baron-Cohen’s presentation was careful, deliberate and backed up by citations. First, autism is genetic and runs in families; if you test people who have been diagnosed as autistic, their parents tend to do the sorts of jobs such people are suited to – science, engineering, IT and so on. But the big statistic is that autism, in all of its forms, is around four times more common in males than females. In other words, the genetic components have a biologically sex-based component.

Both speakers then argued for neurodiversity – rather like biodiversity – a recognition that we’re different, but also that these differences may be sex-linked. Adam Feinstein, who has an autistic son, has written a book on autism and employment, and appealed for recognition of the fact that those with autistic skills are also good at science, coding and IT. This is because they are good at localised skills, especially attention to detail. This is very useful in lab work, coding and IT. Code is like uncooked spaghetti: it doesn’t bend, it breaks, and you have to be able to spot exactly where and why it breaks. Some employers, such as SAP and other tech companies, have now established proactive recruitment of those on the spectrum (or constellation). This will mean that they are likely to employ more men than women.

Now here’s the dilemma. What this implies is that to expect a 50:50 outcome is hopelessly utopian. In other words, if you want equality of outcome (not opportunity) in terms of gender, that is unlikely. One could argue that the opening up of opportunities to people with autism in technology has been a good thing. Huge numbers of people have been, and will be, employed in these sectors who may not have had the same opportunities in the past. But equality and diversity clash here. True diversity may be the recognition of the fact that all of us are not equal.[...]

20 (some terrifying) thought experiments on the future of AI


A slew of organisations have been set up to research and allay fears around AI. The Future of Life Institute in Boston, the Machine Intelligence Research Institute in Berkeley, the Centre for the Study of Existential Risk in Cambridge and the Future of Humanity Institute in Oxford all research and debate the checks that may be necessary to deal with the opportunities and threats that AI brings. This is hopeful, as we do not want to create a future that contains imminent existential threats, some known, some unknown. This has been framed as a sense-check, but some see it as a duty. For example, they argue that worrying about the annihilation of all unborn humans is a task of greater moral import than worrying about the needs of all those who are living. But what are the possible futures?

1. Utopian

Could there not be a utopian future, where AI solves the complex problems that currently face us? Climate change, reducing inequalities, curing cancer, preventing dementia and Alzheimer’s disease, increasing productivity and prosperity – we may be reaching a time where science as currently practised cannot solve these multifaceted and immensely complex problems. We already see how AI could free us from the tyranny of fossil fuels with electric, self-driving cars and innovative battery and solar panel technology. AI also shows signs of cracking some serious issues in health, on diagnosis and investigation. Some believe that this is the most likely scenario and are optimistic about us being able to tame and control the immense power that AI will unleash.

2. Dystopian

Most of the future scenarios represented in culture, science fiction, theatre or movies are dystopian, from the Prometheus myth, to Frankenstein, and on to Hollywood movies. Technology is often framed as an existential threat, and in some cases, such as nuclear weapons and the internal combustion engine, with good cause. Many calculate that the exponential rate of change will produce, within decades or less, AI that poses a real existential threat. Stephen Hawking, Elon Musk, Peter Thiel and Bill Gates have all heightened our awareness of the risks around AI.

3. Winter is coming

There have been several AI winters, as hyperbolic promises failed to materialise and the funding dried up. From 1956 onwards AI has had its waves of enthusiasm, followed by periods of inaction – summers followed by winters. Some also see the current wave of AI as overstated hype and predict a sudden fall, or a realisation that the hype has been blown out of all proportion to the reality of AI capability. In other words, AI will proceed in fits and starts and will be much slower to realise its potential than we think.

4. Steady progress

For many, however, it would seem that we are making great progress. Given the existence of the internet, successes in machine learning, huge computing power, tsunamis of data from the web, and rapid advances across a broad front of applications resulting in real successes, the summer–winter analogy may not hold. It is far more likely that AI will advance in lots of fits and starts, with some areas advancing more rapidly than others. We’ve seen this in NLP (Natural Language Processing) and the mix of technologies around self-driving cars. Steady progress is what many believe is the realistic scenario.

5. Managed progress

We already fly in airplanes that largely fly themselves, and systems all around us are largely autonomous, with self-driving cars an almost certainty. But let us not confuse intelligence with autonomy. Full autonomy that leads to catastrophe, because of willed action by AI, is a long way off. Yet autonomous systems already decide what we buy and what price we buy things at, and have the power to outsmart us at every turn. Some argue that we should always be in control of such progress, even slow it down, to let regulation, risk analysis and management keep pace with the potential threats.

6. Runaway train

AI cou[...]

New evidence that ‘gamification’ does NOT work


Gamification is touted as new and a game changer, and it’s not short of hyperbolic claims about increasing learning. Well, it’s not so new: games have been used in learning forever, from the very earliest days of computer-based learning. But that’s often the way with fads – people think they’re doing ground-breaking work when it’s been around for eons.

At last we have a study that actually tests ‘gamification’ and its effect on mental performance, using cognitive tests and brain scans. The Journal of Neuroscience, a respected, peer-reviewed journal, has just published an excellent study with the unambiguous title ‘No Effect of Commercial Cognitive Training on Neural Activity During Decision-Making’, by Kable et al.

Gamification has no effect on learning

The researchers looked for changes in behaviour in 128 young adults, using pre- and post-testing, before and after 10 weeks of training on gamified brain-training products (Lumosity), commercial computer games and normal practice. Specifically, they looked for improvements in memory, decision-making, sustained attention and the ability to switch between mental tasks. They found no improvements: “We found no evidence for relative benefits of cognitive training with respect to changes in decision-making behaviour or brain response, or for cognitive task performance.”

What is clever about the study is that three groups were tested:

1. Gamified quizzes (Lumosity)
2. Simple computer games
3. Simple practice

All three groups were found to have the ‘same’ level of improvement in tasks, so learning did take place, but the significant word here is ‘same’, showing that brain games and gamification had no special effect. Note that the Lumosity product is gamification (not a learning game), as it has gamification elements, such as Lumosity scores, speed scores and so on, and is compared with the other two groups – one of which is ‘game-based’ learning – controlled against a third non-gamified, non-game, practice-only group. One of the problems here is the overlap between gamification and game-based learning. They are not entirely mutually exclusive, as most gamification techniques have pedagogic implications and are not just motivational elements.

The important point here is the point made by the 69 scientists who originally criticised the Lumosity product and claims: any activity by the brain can improve performance, but that does not give gamification an advantage. In fact, the cognitive effort needed to master and play the ‘game’ components may take more overall effort than other, simpler methods of learning.

Lumosity have form

Lumosity are no strangers to false claims based on dodgy neuroscience, and were fined $2m in 2015 for claiming that evidence of neuroplasticity supported their claims on brain training. There is perhaps no other term in neuroscience that is more overused or misunderstood than ‘neuroplasticity’, as it is usually quoted as an excuse for going back to the old behaviourist ‘blank slate’ model of cognition and learning. Lumosity, and many others, were making outrageous claims about halting dementia and Alzheimer’s disease. Sixty-seven senior psychologists and neuroscientists blasted their claims and the Federal Trade Commission swung into action. The myth was literally busted.

Pavlovian gamification

I have argued for some time that the claims of gamification are exaggerated, and this study is the first I’ve seen that really puts this to the test, with a strong methodology, in a respected peer-reviewed journal. This is not to say that some aspects of gaming are not useful – for example, the motivational effect – just that much of what passes for gamification is Pavlovian nonsense, backed up with spurious claims. I do think that gamification can be useful, as there are DOs and DON’Ts, but that it is often counterproduc[...]
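The three-group, pre/post design is the crux of the study’s logic: everyone improves with practice, so the only interesting question is whether the gamified group improves *more* than the controls. A toy sketch of that comparison, with made-up scores (emphatically not the study’s data):

```python
# Made-up scores illustrating the pre/post, three-group design - NOT the
# actual data from the Kable et al. study.
pre_post = {
    "gamified (Lumosity-style)": [(50, 60), (55, 66), (48, 57)],
    "computer games":            [(52, 62), (49, 60), (51, 61)],
    "simple practice":           [(50, 61), (53, 63), (47, 57)],
}

def mean_gain(scores):
    """Average improvement from pre-test to post-test."""
    return sum(post - pre for pre, post in scores) / len(scores)

for group, scores in pre_post.items():
    print(f"{group}: mean gain {mean_gain(scores):.1f}")
# All three groups gain about the same amount, so practice alone explains
# the improvement - no relative benefit from gamification.
```

A single-group study would have shown ‘improvement’ and been trumpeted as proof; only the controls reveal that the gamification added nothing.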

Fractious Guardian debate: Tech in schools – money saver or waster?


7 reasons why ‘teacher research' is a really bad idea

The Guardian hosted an education debate last night. It was pretty fractious, with the panel split down the middle and the audience similarly divided. On one side lay the professional lobby, who saw teachers as the only drivers of tech in schools, doing their own research and being the decision makers. On the other side were those who wanted a more professional approach to procurement, based on objective research and cost-effectiveness analysis. What I heard was what I often hear at these events: that teachers should be the researchers and experimenters, adopting an entrepreneurial method, making judgements and determining procurement. I challenged this – robustly. Don’t teachers have enough on their plate, without taking on several of these other professional roles? Do they have the time, never mind the skills, to play all of these roles? (Thanks to Brother UK for pic.)

1. Anecdote is not research
To be reasonably objective in research you need to define your hypothesis, design the trial, select your sample, have a control, isolate variables and be good at gathering and interpreting the data. Do teachers have the time and skills to do this properly? Some may, but the vast majority do not. It normally requires a post-graduate degree (not in teaching) and some real research practice before you become even half good at this. I wouldn’t expect my GP to mess around with untested drugs and treatments on the strength of anecdotal evidence from other GPs. I want objective research by qualified medical researchers. En passant, let me give a famous example. Learning styles (VAK or VARK) were promulgated by Neil Fleming, a teacher, who based them on little more than armchair theorising. They are still believed by the majority of teachers, despite oodles of evidence to the contrary. This is what happens when bad teacher research spreads like a meme. It is believed because teachers rely on themselves and not on objective evidence.

2. Not in job description
Being a ‘researcher’ is not in the job description. Teaching is hard; it needs energy, dedication and focus. By all means seek out the research and apply what is regarded as good practice, but the idea that good practice is whatever any individual deems it to be through their personal research is a conceit. A school is not a personal lab – it has a purpose.

3. Don’t experiment on other people’s children
There is also the ethical issue of experimenting on other people’s children. I, as a parent, resent the idea that teachers will experiment on my children. I assume they’re at school to learn, not to be the subjects of teachers’ ‘research’ projects in tech.

4. Category mistake
What qualifies a teacher to be a researcher? It’s like the word ‘leader’ – when anyone can simply call themselves a leader, the word is rendered meaningless. I have no problem with teachers seeking out good research, even making judgements about what they regard as useful and practical in their school, but that’s very different from calling yourself a ‘researcher’ and doing ‘research’ yourself. That’s a whole different ball game. This is a classic category mistake, shifting the meaning of a word to suit an agenda.

5. Entrepreneurial
This word came up a lot. We need more start-up companies in schools. Now that’s my world. I’m an investor, I run an EdTech start-up, and, believe me, that’s the last thing you need. Most start-ups fail and you don’t want failed projects crashing around in your school. But it “teaches the kids how to be entrepreneurs”, said one of the panel. No it doesn’t. Start-ups have agendas. Sure, they’ll want to get into your school, but don’t believe that this is about ‘research’ – it’s about ‘referral’. Wait, look, assess, analyse, then try and procure.

6. Teaching tech bias
Technology is an[...]
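To make point 1 concrete, here is a minimal sketch of what even the simplest defensible comparison involves: a permutation test on scores from a control class and an intervention class. The data, class sizes and variable names are invented for illustration; a real trial would also need proper randomisation and ethical approval.

```python
import random

def permutation_test(control, treatment, n_iter=10000, seed=42):
    """Estimate the p-value for the observed difference in mean scores
    by repeatedly shuffling the pooled scores into two fake groups."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(control) + list(treatment)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        new_t = pooled[:len(treatment)]
        new_c = pooled[len(treatment):]
        diff = sum(new_t) / len(new_t) - sum(new_c) / len(new_c)
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical test scores: control class vs a class taught with a new method
control = [62, 58, 71, 65, 60, 68, 64, 59]
treatment = [66, 70, 64, 72, 69, 75, 63, 71]
print(permutation_test(control, treatment))
```

Even this toy version needs a defined outcome measure, a comparison group and an honest accounting for chance – none of which an anecdote provides.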

10 recommendations on HE from OEB Mid-Summit (Reykjavik)


Iceland was a canny choice for a Summit. Literally in sight of the house where Reagan and Gorbachev met in 1986 (the Berlin Wall fell in 1989), it was a deep, at times edgy, dive into the future of education. When people get together, face up to rifts in opinion and talk them through – as the Reagan-Gorbachev summit showed – things happen. Well, maybe. Here are my ten personal takeaways from the event.

1. Haves-have nots
First the place. Iceland emerged up through the Mid-Atlantic Ridge, which still runs right through the middle of the island. Sure enough, while we were here, there were political rifts in the US, with the Comey-Trump farrago, and a divisive election in the UK. It is clear that economic policies have caused fractures between the haves and the have-nots. In the UK there’s a hung Parliament, a split country, looming Brexit negotiations and a crisis in Northern Ireland. In the US, Trump rode into Washington on a wave of disaffection and is causing chaos. But let’s not imagine that Higher Education sits above all of this. Similar fault lines emerged at this Summit. As Peter Thiel said, Higher Education is like the Catholic Church on the eve of the Reformation: “a priestly class of professors….people buying indulgences in the form of amassing enormous debt for the sort of the secular salvation that a diploma represents”. More significantly, he claims there has been a ‘failure of the imagination, a failure to consider alternative futures’. Culture continues to trump strategy. Higher Education is a valuable feature of cultural life but people are having doubts. Has it become obese? Why have costs ballooned while delivering the same experience? There are problems around costs, quality of teaching and relevance. Indeed, could Higher Education be generating social inequalities?
In the US and UK there was a perception, not without truth, that there is a gulf between an urban, economically stable, educated elite and the rest, who have been left to drift into low-status jobs and a loss of hope for their children. The Federal debt held on student loans in the US has topped $1.5 trillion. In the UK, institutions simply raise fees to whatever cap they can. The building goes on, costs escalate and student loans get bigger. Unlike almost every other area of human endeavour, there has been little effort to reduce costs and look for cost-effective solutions.
Recommendation: HE must lower its costs and scale.

2. Developed-developing
The idea that the current Higher Education model should be applied to the developing world is odd, as it doesn’t seem to work that well in the developed world. Rising costs, student and/or government debt, dated pedagogy and an imbalance between the academic and the vocational render its application in the developing world at best difficult, at worst dangerous. I have been involved in this debate and it is clear that the developing world needs vocational first, academic second.
Recommendation: Develop a different, digital HE model for the developing world.

3. Public-private
In an odd session by Audrey Watters, we had a rehash of one of her blogs, about personalized learning being part of the recent rise in ‘populism’. She blamed ‘capitalism’ for everything, seeing ‘ideology’ everywhere. But as one brave participant shouted behind me, “so your position is free from ideology then?” It was the most disturbing session I heard, as it confirmed my view that the liberal elite are somewhat out of touch with reality and all too ready to trot out old leftist tropes about capitalism and ideology, without any real solutions. The one question, from the excellent Valerie Hannon, stated quite simply that she was “throwing the baby out with the bath water”.
Underlying much of the debate at the summit lay an inconvenient truth that Higher Ed has a widespread and deep anti-corpo[...]

Philosophy of technology - Plato, Aristotle, Nietzsche, Heidegger - technology is not a black box


Greek dystopia
The Greeks understood, profoundly, the philosophy of technology. In Aeschylus’s Prometheus Bound, when Zeus hands Prometheus the power of metallurgy, writing and mathematics, Prometheus gifts it to man, so Zeus punishes him with eternal torture. This warning is the first dystopian view of technology in Western culture. Mary Shelley called Frankenstein ‘A Modern Prometheus’, and Hollywood has delivered on that dystopian vision for nearly a century. Art has largely been wary and critical of technology.

God as maker
But there is another, more considered view of technology in ancient Greece. Plato articulated a philosophy of technology, seeing the world, in his Timaeus, as the work of an ‘Artisan’; in other words, the universe is a created entity, a technology. Aristotle makes the brilliant observation in his Physics that technology not only mimics nature but continues “what nature cannot bring to a finish”. They set in train an idea that the universe was made and that there was a maker – the universe as a technological creation.
The following two-thousand-year history of Western culture bought into the myth of the universe as a piece of created technology. Paley, who formulated the modern argument for the existence of God from design, used technological imagery, the watch, to specify and prove the existence of a designed universe and therefore a designer – we call (him) God. In Natural Theology; or, Evidences of the Existence and Attributes of the Deity, he uses an argument from analogy, comparing the workings of a watch with the observed movements of the planets in the solar system, to conclude that the universe shows signs of design and that there must be a designer. Dawkins titled his book The Blind Watchmaker as its counterpoint. God as watchmaker, as technologist, has been the dominant popular philosophical belief for two millennia. Technology, in this sense, helped generate this metaphysical deity.
It is this binary separation of the subject from the object that allows us to create new realms, heaven and earth, which acquire a moral patina and become good and evil, heaven and hell. The machinations of the pastoral heaven and the fiery foundry that is hell echo the dystopian vision of the Greeks.
Technology is the manifestation of human conceptualisation and action: it creates objects that enhance human powers, first physical, then psychological. With the first hand-held axes, we turned natural materials to our own ends. With such tools we could hunt, expand and thrive, then control the energy from felled trees to create metals and forge even more powerful tools. Tools beget tools.
Monotheism rose on the back of cultures in the Fertile Crescent of the Middle East, who literally lived on the fruits of their tool-aided labour. The spade, the plough and the scythe gave them time to reflect. Interestingly, our first records, on that beautifully permanent piece of technology, the clay tablet, are largely accounts of agricultural produce. The rise of writing and efficient alphabets made writing the technology of control. We are at heart accountants, holding everything to account, even our sins. The great religious books of accounts were the first global best sellers.

Technology slew God
Technology may have suggested, then created, God, but in the end it slew him. With Copernicus, who drew upon technology-generated data, we found ourselves at some distance from the centre of the Universe, not even at the centre of our own little whirl of planets. Darwin then destroyed the last conceit, that we were unique and created in the eyes of a God. We were the product of the blind watchmaker, a mechanical, double-helix process, not a maker – the sons not of Gods but of accidents of genetic generation.
Anc[...]

10 uses for Amazon Echo in corporates


OK, she’s been in my kitchen for months and I’m in the habit of asking her to give me Radio 4 while I’m making my morning coffee. Useful for music as well, especially when a tune comes into your head. But it’s usually some question I have in my head or a topic I want some detail on. My wife’s getting used to hearing me talk to someone else while in another room. But what about the more formal use of Alexa in a business? Could its frictionless, hands-free, natural language interface be of use in the office environment?

1. Timer
How often have you been in a meeting that’s overrun? You can set multiple timers on Alexa and she will light up and alarm you (softly) towards the end of each agenda item, say one minute before the next. It could also be useful as a timer for speakers and presenters. Ten minutes each? Set her up and she provides both visual and aural timed cues. I guess it would pay for itself at the end of the first meeting!

2. Calendar functionality
As Alexa can be integrated with your Google calendar, you simply say, “Alexa, tell Quick Events to add an event on Tuesday 4th March at 11 a.m.”. It prompts you until it has the complete scheduled entry.

3. To do lists
Alexa will add things to a To Do list. This could be an action list from a meeting or a personal list.

4. Calculator
Need numbers added, subtracted, multiplied, divided? You can read them in quickly and Alexa replies quickly.

5. Queries and questions
Quick questions or more detailed stuff from Wikipedia? Alexa will oblige. You can also get stock quotes and even do banking through Capital One. Expect others to follow.

6. Domain specific knowledge
Product knowledge, company-specific knowledge: Alexa can be trained to respond to voice queries. Deliver a large range of text files and Alexa can find the relevant one on request.

7. Training
You can provide text (text to speech) or your own audio briefings. Indeed, you can have as many of these as you want. Or go one step further with a quiz app that delivers audio training.

8. Music
Set yourself up for the day or have some ambient music on while you work? Better still, music that responds to your mood and requests – Alexa is your DJ on demand.

9. Order sandwiches, pizza or an Uber
As Alexa is connected to several suppliers, you can get these delivered to your business door. Saves all of that running out for lunchtime sandwiches or pizza.

10. Control the office environment
You can control your office environment through the Smart Home Skill API. This will work with existing smart home devices, but there’s a developer’s kit so that you can develop your own. It can control lights, thermostats, security systems and so on.

Conclusion
As natural language AI applications progress, we will see these business uses become more responsive and sophisticated. This is likely to eat into that huge portion of management time that the Harvard Business Review identified as admin. Beyond this are applications that deliver services, knowledge and training specific to your organisation and to you as an individual. We’re working on this as a training application as we speak.[...]
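For item 6, a custom skill ultimately returns JSON in Alexa's documented response format (version, outputSpeech, shouldEndSession). Here is a minimal sketch of a handler; the intent name `ProductQueryIntent` and the product answer are invented for illustration, not part of any real skill.

```python
def build_alexa_response(speech_text, end_session=True):
    """Build a minimal response dict in the Alexa custom-skill JSON format."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event):
    """Dispatch an incoming Alexa request to a hypothetical product-FAQ lookup."""
    intent = event.get("request", {}).get("intent", {}).get("name")
    if intent == "ProductQueryIntent":  # hypothetical intent name
        return build_alexa_response("The X200 ships with a two-year warranty.")
    return build_alexa_response("Sorry, I didn't catch that.", end_session=False)
```

In practice a dictionary of company documents would sit behind the lookup, with the same thin dispatch layer on top.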

AI moving towards the invisible interface


AI is the new UI
What do the most popular online applications all have in common? They all use AI-driven interfaces. AI is the new UI. Google, Facebook, Twitter, Snapchat, email, Amazon, Google Maps, Google Translate, satnav, Alexa, Siri, Cortana and Netflix all use sophisticated AI to personalise in terms of filtering, relevance, convenience, and time- and place-sensitivity. They work because they tailor themselves to your needs. Few notice the invisible hand that makes them work, that makes them more appealing. In fact, they work because they are invisible. It is not the user interface that matters, it is the user experience.
Yet in online learning, AI-driven UIs are rarely used. That’s a puzzle, as it is the one area of human endeavour that has the most to gain. As Black & Wiliam showed, feedback that is relevant, clear and precise goes a long way in learning. Not so much a silver bullet as a series of well-targeted rifle shots that keep the learner moving forward. When learning is sensitive to the learner’s needs in terms of pace, relevance and convenience, things progress.
Learning demands attention, and because our working memory is the narrow funnel through which we acquire knowledge and skills, the more frictionless the interface, the greater the speed and efficacy of learning. Why load the learner with the extra tasks of learning an interface, navigation and extraneous noise? We’ve seen steady progress beyond the QWERTY keyboard, designed to slow typing down to avoid mechanical jams, towards mice and touch screens. But it is with the leap into AI that interfaces are becoming truly invisible.

Textless
Voice was the first breakthrough, and voice recognition is only now reaching the level of reliability that allows it to be used in consumer computers, smartphones and devices in the home, like Amazon Echo and Google Home. We don’t have to learn how to speak and listen; those are skills we picked up effortlessly as young children. In a sense, we didn’t have to learn how to do these things at all – they came naturally. As bots develop the ability to engage in dialogue, they will be ever more useful in teaching and learning. AI also provides typing, fingerprint and face recognition. These can be used for personal identification, even assessment. Face recognition for ID, as well as thought diagnosis, is advancing, as is eye movement and physical gesture recognition. Such techniques are commonly used in online services such as Google, Facebook, Snapchat and so on. But there are bigger prizes in the invisible interface game. So let's take a leap of the imagination and see where this may lead over the next few decades.

Frictionless interfaces
Mark Zuckerberg announced this year that he wants to get into mind interfaces, where you control computers and write straight from thought. This is an attempt to move beyond smartphones. The advantages are obvious: you think fast, type slow. There’s already someone with a pea-sized implant who can type eight words a minute. Optical imaging (lasers) that read the brain is one possibility. There is an obvious problem here around privacy, but Facebook claim to be focussing only on words chosen by the brain for speech, i.e. things you were going to say anyway. This capability could also be used to control augmented and virtual reality, as well as communications with the internet of things. Underlying all of this is AI.
In Sex, Lies and Brain Scans, by Sahakian and Gottwald, the advances in this area sound astonishing. John-Dylan Haynes (Max Planck Institute) can already predict intentions in the mind, with scans, to see whether the brain is about to add or subtract two numbers, or press a right or left button. Words can also be read, with Tom Mitchell (Carnegie [...]

Scrap your company values and replace with 'Don't be a dick!'... the rest is hubris


A brief conversation with a young woman, in the queue for lunch at a corporate ‘values’ day, opened my eyes to the whole values thing in organisations. “I have my values,” she said, “and they’re not going to be changed by an HR department.... I’ll be leaving in a couple of years and no doubt their HR will have a different set of values… which I’ll also ignore”. Wisest thing I heard all day.
You’ve probably had the ‘values’ treatment. Suddenly, parachuted out of HR, come a few abstract nouns, or worse, an acronym, stating that the organisation now has some really important ‘values’. Even worse, an expensive external agency may have juiced them up. I genuinely like organisations that have a strategy, a purpose, even a mission. But the current obsession with organisational values I don't buy.
I also chaired a Skills Summit last month, where innumerable HR folk paraded their company values with the usual earnestness. An endless stream of abstract nouns, all of which seemed like things any normal human being would want in any context, in or out of work – you know the words: integrity, innovation, honesty, community.... After a full day of this stuff I was impressed by the guy who ran a small, successful software company, who stood at the podium and claimed that his company didn't really have any stated values, but felt that the whole 'values' thing could be replaced by one phrase: 'Don't be a dick!'. All company values can be substituted by this one phrase. The rest is hubris....

Bullshit Bingo
Having dealt with hundreds of large organisations for more than 30 years, I have yet to find one whose values were anything more than platitudes. They are invariably a crude mixture of reactive PR, HR overreach and crude selection from a list of abstract nouns, sometimes forced into an idiotic acronym. In reality – even when masked by complex consultancy reports and training – it's almost always bullshit Bingo.
Why would we imagine that HR have any skills in this area? In what sense are they 'experts' in values? For me, it is a utopian view of work and organisations. I can remember the day when organisational 'values' lists didn't exist. People were more honest and realistic about expectations. They came in when HR suddenly decided that they had to look after our emotional and moral welfare – always a rather ridiculous idea.
The banks were full of this 'values' culture. I worked with most of them. It was all puff and PR. People do not buy into this stuff. They can barely recall what the values are. I have values and I'm not interested in what HR, or some external consultant, says my values should be. The even more ridiculous idea that people who don't adopt those values should be forced out is both wrong and illegal.
The problem here was a shift when HR started to become the people who protect the company against its own employees – that, for example, is what compliance training is largely about: ticking boxes in case of insurance claims and fines. They dress this up in ‘values’ documents but few remember them and even fewer care.... The really interesting thing about 'values', in my experience, is that the companies that felt most compelled to define them – banks, accountancies, consultancies, tech companies, pharma companies etc. – were the very companies where they were most ignored. In fact, they were counterproductive, as the employees all knew they were a scam, designed to 'police' them. Apply this authenticity test to your company values. Sniff out the hubris and bullshit.

Test 1: Bad acronyms – values created to fit word
If your values set is an acronym, they’re likely to be inauthentic. The net result [...]

Snapchat’s smart pivot into an AR company but is AR ready for learning?


Augmentation, in ‘augmented’ reality, comes in all shapes, layers and forms, from bulky headsets and glasses to smartphones. At present the market is characterised by a mix of expensive solutions (HoloLens), failures (Google Glass, Snap Spectacles) and spectacular successes (Pokemon Go, Snapchat filters). So where is all of this going?

Snapchat
Snapchat has pivoted, cleverly, into being not just another messenger service, but the world’s largest augmented reality company. Its ‘filters’, which change every day, use face recognition (AI) and layered graphics to deliver some fun stuff and, more importantly, advertising. It is a clever ploy, as it plays to the personal. You can use fun filters, create your own filter with a piece of dynamic art, or buy one. It’s here that they’re building an advertising and corporate business on designed filters around events and products. That’s smart, and explains why their valuation is stratospheric. Once you play around with Snapchat, you get why it’s such a big deal. As usual, it’s simple, useful, personal and compelling. With over 150 million users and an advertising revenue model that works on straight ads, sponsored filters and sponsored lenses (interactive filters), it has tapped into a market that simply never existed.

Snap Spectacles
Snap Spectacles was their interesting foray into the augmented glasses market – more of a gimmick than a realistic consumer product. Targeted only at Snapchat users, you can’t really wear them with regular glasses, and all they do is record video – but, to be fair, they do that well. However, as with Google Glass, you feel like a bit of a twat. Not really a big-impact product.

HoloLens
With its AI-driven interfaces – head pointing, gesture and voice recognition – it is neat, but at $3000 a pop not really a commercial proposition for Microsoft. As for the ‘experience’, the limited rectangle that is the field of view is disappointing, and ‘killer’ applications are absent. There have been games, Skype applications, 2D and 3D visualisations, but nothing yet that really blows the mind – forget the idea of sci-fi holograms, it’s still all a bit ‘Pepper’s Ghost’ in feel, still tethered, and has a long way to go before being a viable product.

Magic Leap
Bit of a mystery still, as they are a secretive lot. Despite having raised more than $1.4 billion from Google, Alibaba and Andreessen Horowitz, it has still to deliver whatever it is that they want to deliver. Mired in technical problems, they may still pull something out of the bag with their glasses release this year – but it seems you have to wear a belt with some kit attached. Watch this space, as they say, as it is nothing but an empty space for now.

Pokemon Go
We saw the way this market was really going with Pokemon Go: layers of reality on a smartphone. Photographic camera layer, idealised graphic map layer, graphic Pokemon layer, graphic Pokestops layer, GPS layer, internet layer – all layered onto one screen into a compelling social and games experience. Your mind simply brings them all together into one conscious, beautiful, blended reality – more importantly, it was fun. This may be where augmented reality remains until the miniaturisation of headsets and glasses gets down to the level of contact lenses.

AR v VR
I still prefer the full-punch VR immersive experience. AR, in its current form, is a halfway-house experience. The headset and glasses seem like a stretch for all sorts of reasons. You simply have to ask yourself: do I need all of this cost and equipment to see a solar system float in space, when I can see it in 3D on my computer screen? Th[...]

AI fail that will make you gag with disgust……


Among the first consumer robots were vacuum cleaners that bumped around your floors sucking up dirt and dust. The first models simply sensed when they hit something, a wall or a piece of furniture, turned the wheels and headed off in a different direction. The latest vacuum robots actually map out your room and mathematically calculate the optimum route to evenly vacuum the floor. They have a memory, build a mathematical model of the room with laser mapping and 360-degree cameras, can detect objects in real time and have clever corner-cleaning capability. They move from room to room, can be operated from a mobile app – scheduling and so on – and will even automatically recharge when their batteries get low and resume from the point they left. Very impressive.
That’s not to say they’re perfect. Take this example, which happened to a friend of mine. He has a pet dog, and sure enough, the vacuum cleaner would bump into the dog on the carpet, turn and move on. The dog was initially puzzled, sniffed it a bit, but learned to ignore this new pet as something beneath his contempt as top dog. Cats even like to sit on them and take rides around the house.
Then, one day, the owner came back, opened his front door and was hit by a horrific wall of smell. The dog had taken a dump and the robot cleaner had smeared the shit evenly across the entire carpet, even into the corners, room by room, with a mathematical exactitude superior to that of any human cleaner. The smell was overwhelming and the clean-up a Herculean task on hands and knees, accompanied by regular gagging. The lesson here is that AI is smart and can replace humans in all sorts of tasks, but it doesn’t have the checks and balances of normal human intelligence. In fact, the attribution of the word ‘intelligence’, I'd argue (and have here), is an anthropomorphic category error, taking one category and applying it in a separate and completely different domain.
It’s good at one thing, or a few things, such as moving, mapping and sucking, but it doesn’t know when the shit hits the fan.[...]
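The "mathematical exactitude" of the sweep is easy to see in miniature. Here is a toy sketch (not any vendor's algorithm) of a boustrophedon, back-and-forth coverage plan over a grid of floor cells, skipping cells blocked by furniture. Note what it optimises for: visiting every free cell exactly once. Nothing in it asks what the cells contain.

```python
def coverage_path(rows, cols, blocked=frozenset()):
    """Plan a boustrophedon (alternating left-right) sweep over a grid,
    visiting every unblocked cell exactly once. Cells are (row, col)."""
    path = []
    for r in range(rows):
        # sweep left-to-right on even rows, right-to-left on odd rows
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            if (r, c) not in blocked:
                path.append((r, c))
    return path

# A 3x4 room with one piece of furniture at (1, 1)
path = coverage_path(3, 4, blocked={(1, 1)})
```

The plan is complete and efficient, which is exactly why the failure above was so thorough.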

Do bears shit in the woods? 7 reasons why data analytics is a misleading, myopic use of AI in HE


I’m increasingly convinced that HE is being pulled in the wrong direction by its obsession with data analytics, at the expense of more fruitful uses of AI in learning. Sure, it has some efficacy, but the money being spent at present may be mostly wasted.

1. Bears in woods
Much of what is being paid for here is answers to the question, ‘Do bears shit in the woods?’ What insights are being uncovered? That drop-out is caused by poor teaching and poor student support? That students with English as a second language struggle? Ask yourself whether these insights really are insights, or whether they’re something everyone knew in the first place.

2. You call that data?
The problem here is the paucity of data. Most universities don’t even know how many students attend lectures (few record attendance), as they’re scared of the results. I can tell you that the actual data, when collected, paints a picture of catastrophic absence. That’s the first problem – poor data. Other data sources are similarly flawed, as there's little in the way of fine-grained feedback. It's small data sets, often messy, poorly structured and not understood.

3. Easier ways
Much of this so-called use of AI is like reaching over the top of your head with your right hand to scratch your left ear. Complex algorithmic approaches are likely to be more expensive and far less reliable and verifiable than simple measures, like using a spreadsheet or making what little data you have available to faculty in a digestible form.

4. Better uses of resources
The problem with spending all of your money on diagnosis – especially when the diagnosis is a limited set of obvious possible causes that were probably already known – is that the money is usually better spent on treatment. Look at improving student support, teaching and learning, not dodgy diagnosis.

5. Action not analytics
In practice, when those amazing insights come through, what do institutions actually do? Do they record lectures, because students with English as a foreign language find some lecturers difficult and the psychology of learning screams at us to give students repeated access to resources? Do they tackle the issue of poor teaching by specific lecturers? Do they question the use of lectures and shift to active learning (easily the most important intervention, as the research shows)? Do they improve response times on feedback to students? Do they drop the essay as a lazy and monolithic form of assessment? Or do they waffle on about improving the ‘student experience’, where nothing much changes?

6. Evaluation
I see a lot of presentations about why one should do data analytics – mostly around preventing drop-out. I don’t see much in the way of verifiable analysis showing that data analytics has been the actual causal factor in preventing drop-out. I mean a cost-effectiveness analysis. This is not easy, but it would convince me.

7. Myopic view of AI
AI is many things, and a far better use of AI in HE, in my opinion, is to improve teaching through personalised, adaptive learning, better feedback, student support, active learning, content creation and assessment. All of these are available right now. They address the REAL problem – teaching and learning.

Conclusion
To be fair, I applaud efforts from the likes of JISC to offer a data locker, so that institutions can store, share and use bigger data sets. This solves some legal problems and looks at addressing the issue of small data. But it is, as yet, a wholly unproven approach. I work in AI in learning, have an AI learning company, in[...]
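To make point 3 concrete, here is the sort of "spreadsheet-level" measure the post has in mind: two transparent rules over whatever thin data an institution actually holds. The field names, thresholds and records are all invented for illustration; the point is that the logic is auditable in a way an opaque analytics pipeline is not.

```python
def flag_at_risk(students, min_attendance=0.6, max_late_submissions=3):
    """Flag students for human follow-up using two transparent rules:
    low lecture attendance or a pattern of late submissions.
    Thresholds are illustrative, not evidence-based."""
    flagged = []
    for s in students:
        if (s["attendance"] < min_attendance
                or s["late_submissions"] > max_late_submissions):
            flagged.append(s["name"])
    return flagged

# Hypothetical records
students = [
    {"name": "A", "attendance": 0.45, "late_submissions": 1},
    {"name": "B", "attendance": 0.90, "late_submissions": 5},
    {"name": "C", "attendance": 0.85, "late_submissions": 0},
]
print(flag_at_risk(students))
```

Anyone on faculty can read, question and adjust these rules, which is the argument for starting simple before buying algorithms.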

AI is the new UI: 7 ways AI shapes your online experience


HAL stands for ‘Heuristically programmed ALgorithmic computer’. It turns out that HAL has become a reality. Indeed, we deal with thousands of useful HALs every time we go online. Whenever you are online, you are using AI. As the online revolution has accelerated, the often invisible application of AI and algorithms has crept into a vast range of our online activities. A brief history of algorithms includes the Sumerians, Euclid, the origins of the term (al-Khwarizmi), Fibonacci, Leibniz, Gauss, Laplace, Boole and Bayes, but in the 21st century ubiquitous computing and the internet have taken algorithms into the homes and minds of everyone who uses the web.
You’re reading this from a network, using software, on a device, all of which rely fundamentally on algorithms and AI. The vast portion of the software iceberg that lies beneath the surface, doing its clever but invisible thing – the real building blocks of contemporary computing – is algorithms and AI. Whenever you search, get online recommendations, engage with social media, buy, do online banking or online dating, or see online ads, algorithms are doing their devilishly clever work.

BCG’s ten most innovative companies 2016
Boston Consulting Group publish this list every year:
Apple
Google
Tesla
Microsoft
Amazon
Netflix
Samsung
Toyota
Facebook
IBM
Note how it is dominated by companies that deliver access and services online. Note that all, apart perhaps from Toyota, are turning themselves into AI companies. Some, such as IBM, Google and Microsoft, have been explicit about this strategy. Others, such as Apple, Samsung, Netflix and Facebook, have been acquiring skills and have huge research resources in AI. Note also that Tesla, albeit a car company, is really an AI company: their cars are always-on, learning robots. We are seeing a shift in technology towards ubiquitous AI.

1. Search
We have all been immersed in AI since we first started using Google. Google is AI. Google exemplifies the success of AI, having created one of the most successful companies ever on the back of AI. Beyond simple search, they also enable more specific AI-driven search through Google Scholar, Google Maps and other services. Whether it is documents, videos, images, audio or maps, search has become the ubiquitous mode of access. AI is the real enabler when it comes to access. Search engine indexing finds needles in the world’s biggest haystack. Search for something on the web and you’re ‘indexing’ billions of documents and images. Not a trivial task – it needs smart algorithms to do it at all, never mind in a tiny fraction of a second. PageRank was the technology that made Google one of the biggest companies in the world. Google has moved on; nevertheless, the multiple algorithms that rank results when you search are very smart. We all have, at our fingertips, the ability to research and find things that only a tiny elite had access to just 20 years ago.

2. Recommendations
Amazon has built the world’s largest retail company with a raw focus on the user experience, presented by their recommendation engine. Their AI platform, Alexa, now delivers a range of services, but Amazon was made famous by its recommendations, first on books, now on other goods. Recommendation engines are now everywhere on the web. You are more often than not presented with choices that are pre-selected, rather than the result of a search. Netflix is a good example, where the tiling is tailored to your needs. Most social media feeds are now AI-driven, as are many online services, where what you (and others) do determines what you see.

3. Communications
Siri, VIV, Cortana, Alex[...]
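The core idea behind PageRank is compact enough to sketch. A page's rank is the chance a random surfer lands on it: each page shares its rank among the pages it links to, with a damping factor for random jumps. This toy version uses power iteration on a three-page web (the real algorithm operates at vastly larger scale with many refinements).

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration. `links` maps each page to the
    list of pages it links to. Dangling pages share rank uniformly."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: distribute its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# "c" is linked to by both "a" and "b", so it ends up ranked highest
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

The ranks form a probability distribution (they sum to one), which is what lets billions of pages be compared on a single scale.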

7 myths about AI


Big Blue beat Kasparov in 1997 but chess is thriving. AI remains our servant, not our master. Yet mention AI and people jump to the far end of the conceptual spectrum, with big-picture, dystopian and usually exaggerated visions of humanoid robots, the singularity and the existential threat to our species. This is fuelled by cultural messages going back to the Greek Prometheus myth, propelled by Mary Shelley’s Frankenstein (subtitled The Modern Prometheus), through nearly a century of movies, from Metropolis onwards, that portray created intelligence as a threat. The truth is more prosaic.

Work with people in AI and you’ll quickly be brought back from sci-fi visions to more practical matters. Most practical, and even theoretical, AI is what is called ‘weak’ or ‘narrow’ AI. It has no ‘cognitive’ ability. I blame IBM and the consultancies for this hyperbole. There is no ‘consciousness’. IBM’s Watson may have beaten the Jeopardy champions, and Google’s AlphaGo may have beaten the Go champion – but neither knew they had won.

The danger is that people over-promise and under-deliver, so that there’s disappointment in the market. We need to keep a level head here and not see AI as the solution to everything. In fact, many problems need far simpler solutions.

AI is the new UI
AI is everywhere. You use it every day when you use Google, Amazon, social media, online dating, Netflix, music streaming services, your mobile and any file you create, store or open. Our online experiences are largely of AI-driven services. It’s just not that visible. AI is the new UI. However, there are several things we need to know about AI if we are to understand and use it well in our specific domain, in this case teaching and learning:
AI is ‘intelligent’
AI is all about the brain
AI is conscious
AI is strong
AI is general
AI is one thing
AI doesn’t affect me

1. AI is not ‘intelligent’
I have argued that the word ‘intelligent’ is misleading in AI.
It pulls us toward an over-anthropomorphic view of AI, suggesting that it is ‘intelligent’ in the sense of human intelligence. This is a mistake. It is better to see AI in terms of general tasks and competences, not as being intrinsically intelligent, as that word is loaded with human preconceptions.

2. AI is not about the brain
AI is coded and, as such, most of it does not reflect what happens in the human brain. Even the so-called ‘neural network’ approach is only loosely modelled on the networked structure of the brain. It is more analogy than replication. It’s a well-worn argument, but we did not learn to fly by copying the flapping of birds’ wings, and we didn’t learn to go faster by copying the legs of a cheetah – we invented the wheel. Similarly with AI.

3. AI is not cognitive
IBM’s marketing of AI as ‘cognitive technology’ is way off. Fine if they mean it can perform or mimic certain cognitive tasks, but they go further, suggesting that it is in many senses ‘cognitive’. This is quite simply wrong. It has no consciousness, no real general problem-solving abilities, none of the many cognitive qualities of human minds. It is maths. This is not necessarily a bad thing, as it is free from forgetting, racism, sexism and cognitive biases, doesn’t need to sleep, networks, and doesn’t die. In other words, AI is about doing things better than brains, but by other means.

4. AI is weak
There is little to fear from threatening independence[...]
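The ‘it is maths’ point can be made concrete. A single artificial ‘neuron’ – the building block of those loosely brain-inspired networks – is nothing but weighted addition and a threshold. This hypothetical sketch (a classic perceptron, not any particular product’s code) learns the logical AND function by nudging two numbers; there is no cognition anywhere in it:

```python
# A single artificial "neuron" is just arithmetic: multiply, add,
# threshold. It learns logical AND via the perceptron update rule.

def step(x):
    return 1 if x > 0 else 0

# training data: (inputs, expected output) for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.1                                   # learning rate

for _ in range(20):                        # a few passes over the data
    for (x1, x2), target in data:
        out = step(w1 * x1 + w2 * x2 + bias)
        error = target - out               # nudge weights toward the target
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

# after training, the neuron reproduces AND exactly
results = {(x1, x2): step(w1 * x1 + w2 * x2 + bias) for (x1, x2), _ in data}
```

Scale this up to millions of weights and you get modern deep learning – vastly more capable, but still multiplication and addition, not a mind.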

Elon Musk – the Bowie of Business


Having just finished Morley’s brilliant biography of Bowie, it struck me that Musk is the Bowie of business. Constantly reinventing himself: PayPal hero, Tesla road warrior, Solar City sun god, Starman with SpaceX and now the sci-fi Hyperloop hipster – and he’s still only in his forties. A strange fact: the first Tesla car was codenamed DarkStar.

But let’s not stretch Bowie’s leg warmers too far. Ashlee Vance’s biography of Elon Musk is magnificent for mostly other reasons. It’s about Musk the man, his psychology. There’s a manic intensity to Musk, but it’s directed, purposeful and, as Vance says, not about making money. Time and time again he puts everything he’s made into the next, even weirder and riskier project. Neither is he a classic business guy or entrepreneur. For him questions come first, and everything he does is about finding answers. He despises the waste of intellect that gets sucked into law and finance; he’s a child of the Enlightenment and sees it as his destiny to accelerate progress. He doesn’t want to oil the wheels, he wants to drive, foot to the metal, the fastest electric car ever made, then ride a rocket all the way to Mars. As he says, he wants to die there – just not on impact. He is always on the edge of chaos, like a kite that does its best work when it stalls and falls, then soars.

Time and time again, experience and reading tell me about actual leaders who bear no resemblance to the utopian model presented by the bandwagon ‘Leadership’ industry. The one exception is Stanford’s Pfeffer, who also sees the leadership industry as peddling unreal, utopian platitudes. Musk has a string of business successes behind him, including PayPal, and is the major shareholder in three massive public companies, all of which are innovative, successful and global. He has taken on the aerospace, car and energy industries at breathtaking speed, with mind-blowing innovation.
Yet he is known to be mercurial, cantankerous, eccentric, mean, capricious, demanding and blunt; he delivers vicious barbs, swears like a trooper, takes things personally, lacks loyalty and has what Vance calls a ‘cruel stoicism’ – all of these terms taken from the book. He demands long hours and devotion to the cause, and is cavalier in firing people. “Working at Tesla was like being Kurtz in Apocalypse Now”. So, for those acolytes of ‘Leadership’ and all the bullshit that goes with that domain, he breaks every damn rule – then again so do most of them – in fact that’s exactly why they succeed. They’re up against woozies who believe all that shit about leading from behind. So why are people loyal to him and why does he attract the best talent in the field? Well, he has vision. He also has a deep knowledge of technology, is obsessive about detail, takes rapid decisions, doesn’t like burdensome reports and bureaucracy, likes shortcuts and is a fanatic when it comes to keeping costs down. Two small asides – he likes people to turn up at the same time in the morning, and he hates acronyms. I like this. His employees are not playing pool or darts mid-morning and don’t lie around being mindful on brightly coloured bean bags. It’s relentless brainwork to solve problems against insane deadlines. You may disagree, but he does think that only technology will deliver us from climate change and the dependence on oil, and allow us to inhabit planets other than our own, and his businesses form a nexus of energy production, storage and utilisation that, he th[...]

Corbyn is right - proud that my fat cat pay campaign against the CIPD paid off


Corbyn has come under attack for his wibbly-wobbly performance on top pay. Oh how some liberals I know, who pretend to care about the poor, mocked him on social media. But for me, he was on the money. To my mind, this needs urgent attention: first, use the lever of public funding and contracts; second, use ratios (around 20:1, with checks on range), along with caps.

The blog is mightier than the sword
Not a lot of people know this (to be said in a Michael Caine accent), but in 2010 I was single-handedly responsible for slashing fat cat pay in a major institution, through blogging. It was the CIPD. I read the accounts and pointed out that the CEO’s salary was just short of £500k. Not bad for an organisation whose commercial revenue had plummeted (down 23%), research contracted and ridiculed (down 57% and a report pulled), magazine imploded (down 83%), investment returns bombed (down 74.7%), and whose membership was angry and alienated by a command-and-control culture that left them with fewer services and starved of cash. There was also the issue of an odd and overpriced acquisition (Bridges Communications) and a blatant falsehood on her CV on the CIPD website. It seemed outrageous that the leading organisation in ‘personnel’, one that often opines on executive pay, should be taking such liberties. You can read this in detail here.

This caused a shitstorm. The CIPD Chair chipped in to defend the claim, but that simply exposed the fact that the fat cat stuff had been going on for years, when it was shown that the previous CEO, Geoff Armstrong, had also earned a cool half a million. You can read the ludicrous defence here.

What happened next was comical. Personnel Today picked up on my Jackie Orme story and laid out the case, with a link to my blog along with an official response from the CIPD, and got a survey going (see story here). Does the CIPD CEO deserve an £87,000 bonus? Result: NO 94%, YES 6%! Pretty conclusive, and things changed very quickly.
To cut a long story short, the current CEO of the CIPD, Peter Cheese, earned £250k last year. Result.

This is one of the reasons I never joined the CIPD and never will. I have a healthy distrust of membership organisations, which usually end up serving not their members but the staff of the institution itself. Needless to say, from that day the CIPD has never invited me to any event or conference, or involved me in any project. It was worth it. Call them out – it can work, as they hate the publicity. The moral of this story is to use the power of the pen to attack these people personally, along with the remuneration committees that support these extortionate salaries, bonuses and other benefits. Believe me, it is extortion. I’ve been on these boards. It is literally extortion from the public purse.

Universities
The first target should be the universities. Pay at the top has sprinted ahead of the pack. Last year the Russell Group got an average 6% pay rise, taking the average annual package to £366,000. All of this on the back of the widespread and indefensible exploitation of part-time and low-paid teaching staff. This is a disgrace. There’s also the issue of minimum wages right at the bottom. Don’t imagine for one minute that academe is in any sense a beacon of equality or morals. They’re rapacious at the top. Given that they receive huge amounts of public money – a large chunk through student fees, which are in effect government-backed loans they are not responsible for collecting – we have an easy lever here. Get those ratios wo[...]

The future of parenthood through AI – meet Aristotle, Mattel's new parent bot


Parents obviously play an important role in bringing up and educating their children. But it’s fraught with difficulties and pitfalls. Parents tend to face it, for the first time, without much preparation, and most would admit to botching at least some of it along the way. Many parents work hard and don’t have as much time with their children as they’d like. Few escape the inevitable conflicts over expectations, homework, diet, behaviour and so on. So what role could AI play in all this?

AI wedge
Domestic broadband was the thin end of the wedge. Smartphones, tablets and laptops were suddenly in the hands of our children, who lapped them up with a passion. Now, with the introduction of smart, voice-activated devices into the home, a new proxy parent may have arrived: devices that listen, understand and speak back, even perform tasks.

Enter Aristotle
Enter Aristotle, Mattel’s $300 assistant. They may have called it Aristotle because both his parents died when he was young, because he was the able teacher of Alexander the Great, or because Aristotle set going the whole empirical, scientific tradition that led to AI. To be honest, what’s far more likely is that it sounds Greek, classical and authoritative. (Aristotle's view on education here.)

It’s a sort of Amazon Echo or Google Home for kids, designed for their bedrooms. To be fair, the baby alarm has been around for a long time, so tech has been playing this role in some fashion, for some time, largely giving parents peace of mind. It is inevitable that such devices get smarter. By smart, I mean several things. First, it uses voice, to both listen and respond. That’s good. I’ve noticed, in using Amazon Echo, how carefully and precisely I’ve had to speak to get action (see my thoughts on Echo here). There may come a time when early language development, which we know is important in child development, could be enhanced by such AI companions. It may also encourage listening skills.
Secondly, it may encourage and satisfy curiosity. These devices are endlessly patient. They don’t get tired or grumpy, are alert and awake 24/7, and will get very smart. Thirdly, they may enhance parenthood in ways we have yet to imagine.

Child
One aspect of the technology that does appeal is its personalised voice recognition. It knows the child’s voice. This could be useful. One area where it could lessen embarrassment on both sides is timely sex education and advice. This could satisfy the child’s natural curiosity without the angst that child-parent communications can involve, as long as the child knows it is confidential and the parent is in control. As the child gets older, got a dispute over a fact? Amazon Echo, or an Aristotle, may sort it out. Stuck with your homework? These devices will inevitably be able to help. There’s already an app, Photomath – the app that students love and teachers hate – which you simply point at a mathematics problem, and it gives you not only the answer but all the steps in between. Few parents would be able to do this. Similarly with other subjects and languages. There’s no reason why the knowledge of the parent should limit the ability of a child to learn. The important thing is not to let such devices become substitutes for the horrific Tiger Mom experience, hot-housing kids with endless exercises. Learning could be done in a measured fashion. And what parent wouldn’t want such a device to become an alarm, especially on school days?

Parent
The Aros[...]

First flirtations with Alexa (Amazon Echo)…. 10 interesting uses in learning


I’ve spent the last few years working with AI, with WildFire and CogBooks, but here it is, in my home folks, on my kitchen table. First impressions? Looks good: a Space Odyssey-like column in black, and my first encounter is quietly weird – a moving circle of blue light, a HAL-like halo, appears, then a female voice. I say 'Hello', she says 'Hi'. I’m nonplussed by the gendered-voice farrago – a sort of ‘damned if you don’t, damned if you do’ argument. What I’m really interested in is whether it is of any use. So a few ‘get to know you’ questions seemed appropriate.

Where are you?
I’m here and my head is in the cloud.
Where do you come from?
I am designed by Amazon. They’re based in Seattle, Washington.
Are you a man or a woman?
I’m female in character.
Volume was a little low, so… Turn up volume. She did!
How tall are you?
I’m just over 232 millimeters.
How wide are you?
My diameter is 7.6 centimeters.
What weight are you?
I’m just over 1 kilogram.
What do you like?
I like to read.
How do you feel today?
I’m very well thanks.

OK, let’s up the ante – everybody does this, and I have to say it felt a little transgressive… I swear at her…
That’s not very nice to say.
OK, tell me a joke.
What does a house wear? A dress.
Several fairly anodyne jokes later…

OK, enough of the small talk. First up, let’s not compare Alexa to a human. It’s all too easy to do the ‘but she can’t do this… or that…’ thing. I’m not looking for a life companion, or a friend – I want to see if she’s useful. This is the first time I’ve used voice recognition in anger, woven into my life, so I’m keen to focus not on problems but on potential. So far, the voice recognition is damn good. I have a strong accent; that doesn’t throw her, and variations on the phrasing of questions seem to work (not always). There's a real problem with near-sounding homophones, but you learn to be more precise in your pronunciation.
Next line of enquiry: time.

Time
You can ask it the time or date, even holiday dates, the number of days until a holiday and so on. The sort of practical stuff we all need.
What time is it?
Bang on.
What date is it?
Day of the week and date.
When is Burns Night?
Burns Night will be on Wednesday 25 January 2017.
How many days to Burns Night?
There are 19 days until Burns Night.
The timer functions are also neat, as these are often annoyingly fiddly on your cooker or alarm clock. How often do you pop something in the oven and either ‘look to check’ or suddenly smell the charred remains?
Set a timer for 10 minutes.
Set a second timer for 20 minutes.
How much time is left on my timer?
Then there are the alarm functions.
Set alarm for 7.30 tomorrow morning.
All good: just ask, it confirms the time – done. Beyond this, she integrates with Google Calendar, reminding you of what you have to do today, tomorrow…

To do lists
To do lists are neat. I use a small notebook, but for household stuff, a shopping list or to do list in the kitchen is handy. We can all add to the list. My gut feel, however, is that this will go the way of the chalkboard – unloved and unused.

OK, let’s pause, as future uses are starting to emerge…

Use 1 – Work and Personal Assistant
Only a start, but I can already see this being used in organisations, sitting on the meeting room table, with alarms set for 30 minutes, 45 minutes and five minutes in an hour-long meeting. Once fully developed, it could be an ideal resource in meetings for com[...]

AI isn’t a prediction, it’s happening. How fast? What do the experts say?


AI predictions have been notoriously poor in the past. The idea that machines will transcend man has been the subject of speculative thought for millennia. We can go all the way back to Prometheus Bound, by Aeschylus, one of the oldest Greek tragedies we have, where the god Prometheus is shackled to a rock, his liver eaten by an eagle for eternity. His crime? To have given man fire, and knowledge of writing, mathematics, astronomy, agriculture and medicine. The Greeks saw this as a topic of huge tragic import – the idea that we had the knowledge and tools to challenge the gods. It was a theme that was to recur, especially in the Romantic period, with Goethe, Percy Bysshe Shelley and Mary Shelley, who subtitled her book Frankenstein ‘The Modern Prometheus’. Remember that Frankenstein is not the created monster but his creator. In many ways AI prediction is still largely in this Romantic tradition – one that values idle thought and commentary over reason.

There is something fascinating about prediction in AI, as that is what the field purports to do – the predictive analytics embedded in consumer services such as Google, Amazon and Netflix have been around for a long time. Their invisible hand has been guiding your behaviour, almost without notice. So what does the field of AI have to say about itself? Putting aside Kurzweil’s (2005) singularity, as too crude and singularly utopian, there have been some significant surveys of experts in the field.

Survey 1: 1973
One of the earliest surveys was of 67 AI experts in 1973, by the famous AI researcher Donald Michie. He asked how many years it would be until we would see “computing exhibiting intelligence at adult human level”. As you can see, there was no great rush towards hype at that time.
Indeed, the rise in 1993 turned out to be reasonable, as by that time there had been significant advances, and 2023 and beyond seems not too unreasonable, even now.

Survey 2: 2006
Jumping to Moor (2006), a survey was taken at Dartmouth College on the 50th anniversary of the famous AI conference organised by John McCarthy in 1956, where the modern age of AI started. Again, we see a considerable level of scepticism, including substantial numbers of complete sceptics who answered 'Never' to these questions.

Survey 3: 2011
Baum et al. (2011) took the Turing test as their benchmark, and surveyed on the ability to pass Turing tests, with the following results (at 50% probability):
Third grade – 2030
Turing test – 2040
Nobel research – 2045
Given that the Todai project got a range of AI techniques to pass the Tokyo University entrance exam, this may seem like an underestimate of progress.

Survey 4: 2014
The most recent serious attempt, where all of the above data is summarised if you want more detail, was by Müller and Bostrom (2014). Their conclusion, based on surveying four groups of experts, was:
“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.”
So still some way off: 50:50 at around 30 years, and more certain 50 years hence.

A particularly [...]

2016 was the smartest year ever


For me, 2016 was the year of AI. It went from an esoteric subject to a topic you’d discuss down the pub. In lectures on AI in learning around the world – in New Zealand, Australia, Canada, the US, the UK and around Europe – I could see that this was THE zeitgeist topic of the year. More than this, things kept happening that made it very real…

1. AI predicts a Trump win
One particular instance was a terrifying epiphany. I was teaching on AI at the University of Philadelphia on the morning of the presidential election, and showed AI predictions that pointed to a Trump win. Oh how they laughed – but the next morning confirmed my view that old media and pollsters were stuck in an Alexander Graham Bell world of telephone polls, while the world had leapt forward to data gathering from social media and other sources. They were wrong because they don’t understand technology. It’s their own fault, as they have an in-built distaste for new technology, which they see as a threat. At a deeper level, Trump won because of technology. The deep cause – technology replacing jobs – has already come to pass. It was always thus. Agriculture was mechanised and we moved into factories; factories were automated and we moved into offices. Offices are now being mechanised and we’ve nowhere to go. AI will be the primary political, economic and moral issue for the next 50 years.

2. AI predicts a Brexit win
On the same basis, using social media data predictions, I predicted a Brexit win. The difference here was that I voted for Brexit. I had a long list of reasons – democratic, fiscal, economic and moral – but above all, it had become obvious that the media and traditional, elitist commentators had lost touch with both the issues and the data. I was a bit surprised at the abuse I received, online and face-to-face, but the underlying cause – technology replacing meaningful jobs – has come to pass in the UK also. We can go forward in a death embrace with the EU or create our own future. I chose the latter.

3.
Tesla times
I sat in my mate Paul Macilveney’s Tesla (he has one of only two in Northern Ireland) while it accelerated (silently), pushing my head back into the passenger seat. It was a gas without the gas. On the dashboard display I could see two cars ahead and vehicles all around the car, even though they were invisible to the driver. Near the end of the year we saw a Tesla predict an accident between two other unseen cars before it happened. But it was when Paul took his hands off the steering wheel, as we cambered round the corner of a narrow road in Donegal, that the future came into focus. In 2016, self-driving cars became real, and inevitable. The car is now a robot in which one travels. It has agency. More than this, it learns. It learns your roads, routes and preferences. It is also connected to the internet, and that learning – the mapping of roads – is shared with all as-yet-unborn cars.

4. AI on tap
As the tech giants motored ahead with innumerable acquisitions and the development of major AI initiatives, some even redefining themselves as AI companies (IBM and Google), it suddenly became possible to use their APIs to do useful things. AI became a commodity or utility – on tap. That proved useful, very useful, in starting a business.

5. OpenAI
However, as an antidote to the danger that the tech monsters will be masters of the AI universe, Elon Musk started OpenAI. This is already proving to be a useful [...]