
Donald Clark Plan B

What is Plan B? Not Plan A!

Updated: 2017-07-20T15:53:19.650+00:00


Is gender inequality in technology a good thing?


I’ve just seen two talks back to back. The first was about AI, where the now compulsory first question came from the audience: ‘Why are there so few women in IT?’ It got a rather glib answer, to paraphrase – if only we tried harder to overcome patriarchal pressure on girls to take computer science, there would be true gender balance. I'm not so sure.

This was followed by an altogether different talk by Professor Simon Baron-Cohen (yes – brother of) and Adam Feinstein, who gave a fascinating talk on autism and why professions are now getting realistic about the role of autism, and its accompanying gender difference, in employment.

Try to spot the bottom figure within the coloured diagram. This is just one test for autism, or being on what is now known as the ‘spectrum’. Many more guys in the audience got it than women, despite there being more women than men in the audience. Turns out autism is not so much a spectrum as a constellation.

Baron-Cohen’s presentation was careful, deliberate and backed up by citations. First, autism is genetic and runs in families, and if you test people who have been diagnosed as autistic, their parents tend to do the sort of jobs autistic people themselves are suited to do – science, engineering, IT and so on. But the big statistic is that autism, in all of its forms, is around four times more common in males than females. In other words, the genetic components have a biologically sex-based component.

Both speakers then argued for neurodiversity, rather like biodiversity: a recognition that we’re different, but also that these differences may be sex-linked. Adam Feinstein, who has an autistic son, has written a book on autism and employment, and appealed for recognition of the fact that those with autistic skills are also good at science, coding and IT. This is because they are good at localised skills, especially attention to detail. This is very useful in lab work, coding and IT.

Code is like uncooked spaghetti: it doesn’t bend, it breaks, and you have to be able to spot exactly where and why it breaks. Some employers, such as SAP and other tech companies, have now established pro-active recruitment of those on the spectrum (or constellation). This means they are likely to employ more men than women. Now here’s the dilemma. What this implies is that to expect a 50:50 outcome is hopelessly utopian. In other words, if you want equality of outcome (not opportunity) in terms of gender, that is unlikely. One could argue that the opening up of opportunities to people with autism in technology has been a good thing. Huge numbers of people have been, and will be, employed in these sectors who may not have had the same opportunities in the past. But equality and diversity clash here. True diversity may be the recognition of the fact that all of us are not equal.[...]

20 (some terrifying) thought experiments on the future of AI


A slew of organisations have been set up to research and allay fears around AI. The Future of Life Institute in Boston, the Machine Intelligence Research Institute in Berkeley, the Centre for the Study of Existential Risk in Cambridge and the Future of Humanity Institute in Oxford all research and debate the checks that may be necessary to deal with the opportunities and threats that AI brings. This is hopeful, as we do not want to create a future that contains imminent existential threats, some known, some unknown. This has been framed as a sense-check, but some see it as a duty. For example, they argue that worrying about the annihilation of all unborn humans is a task of greater moral import than worrying about the needs of all those who are living. But what are the possible futures?

1. Utopian
Could there not be a utopian future, where AI solves the complex problems that currently face us? Climate change, reducing inequalities, curing cancer, preventing dementia and Alzheimer’s disease, increasing productivity and prosperity – we may be reaching a time where science as currently practised cannot solve these multifaceted and immensely complex problems. We already see how AI could free us from the tyranny of fossil fuels with electric, self-driving cars and innovative battery and solar panel technology. AI also shows signs of cracking some serious issues in health, on diagnosis and investigation. Some believe that this is the most likely scenario and are optimistic about us being able to tame and control the immense power that AI will unleash.

2. Dystopian
Most of the future scenarios represented in culture – science fiction, theatre or movies – are dystopian, from the Prometheus myth, to Frankenstein and on to Hollywood movies. Technology is often framed as an existential threat, and in some cases, such as nuclear weapons and the internal combustion engine, with good cause. Many calculate that the exponential rate of change will produce AI, within decades or less, that poses a real existential threat. Stephen Hawking, Elon Musk, Peter Thiel and Bill Gates have all heightened our awareness of the risks around AI.

3. Winter is coming
There have been several AI winters, as hyperbolic promises never materialised and the funding dried up. From 1956 onwards AI has had its waves of enthusiasm, followed by periods of inaction: summers followed by winters. Some also see the current wave of AI as overstated hype and predict a sudden fall, or a realisation that the hype has been blown up out of all proportion to the reality of AI capability. In other words, AI will proceed in fits and starts and will be much slower to realise its potential than we think.

4. Steady progress
For many, however, it would seem that we are making great progress. Given the existence of the internet, successes in machine learning, huge computing power, tsunamis of data from the web and rapid advances across a broad front of applications resulting in real successes, the summer-winter analogy may not hold. It is far more likely that AI will advance in lots of fits and starts, with some areas advancing more rapidly than others. We’ve seen this in NLP (Natural Language Processing) and the mix of technologies around self-driving cars. Steady progress is what many believe is a realistic scenario.

5. Managed progress
We already fly in airplanes that largely fly themselves, and systems all around us are largely autonomous, with self-driving cars an almost certainty. But let us not confuse intelligence with autonomy. Full autonomy that leads to catastrophe, because of willed action by AI, is a long way off. Yet autonomous systems already decide what we buy and what price we buy things at, and have the power to outsmart us at every turn. Some argue that we should always be in control of such progress, even slow it down, to let regulation, risk analysis and management keep pace with the potential threats.

6. Runaway train
AI could be a runaway train that moves faster than our ability to control, through restrictions and regulations, what needs to be held back or [...]

New evidence that ‘gamification’ does NOT work


Gamification is touted as new and a game changer, full of hyperbolic claims about its efficacy in increasing learning. Well, it's not so new: games have been used in learning forever, from the very earliest days of computer-based learning. But that’s often the way with fads – people think they’re doing ground-breaking work when it’s been around for eons.

At last we have a study that actually tests ‘gamification’ and its effect on mental performance, using cognitive tests and brain scans. The Journal of Neuroscience, a respected, peer-reviewed journal, has just published an excellent study with the unambiguous title ‘No Effect of Commercial Cognitive Training on Neural Activity During Decision-Making’, by Kable et al.

Gamification has no effect on learning
The researchers looked for changes in behaviour in 128 young adults, using pre- and post-testing, before and after 10 weeks of training on a gamified brain-training product (Lumosity), commercial computer games and normal practice. Specifically, they looked for improvements in memory, decision-making, sustained attention or the ability to switch between mental tasks. They found no improvements. “We found no evidence for relative benefits of cognitive training with respect to changes in decision-making behaviour or brain response, or for cognitive task performance.”

What is clever about the study is that three groups were tested:
1. Gamified quizzes (Lumosity)
2. Simple computer games
3. Simple practice
All three groups were found to have the ‘same’ level of improvement in tasks, so learning did take place, but the significant word here is ‘same’, showing that brain games and gamification had no special effect.

Note that the Lumosity product is gamification (not a learning game), as it has gamification elements, such as Lumosity scores, speed scores and so on, and is compared with the other two groups: one which is 'game-based' learning, controlled against a third non-gamified, non-game, practice-only group. One of the problems here is the overlap between gamification and game-based learning. They are not entirely mutually exclusive, as most gamification techniques have pedagogic implications and are not just motivational elements. The important point here is the point made by the scientists who originally criticised the Lumosity product and claims: any activity by the brain can improve performance, but that does not give gamification an advantage. In fact, the cognitive effort needed to master and play the 'game' components may take more overall effort than other, simpler methods of learning.

Lumosity have form
Lumosity are no strangers to false claims based on dodgy neuroscience, and were fined $2m in 2015 for claiming that evidence of neuroplasticity supported their claims about brain training. There is perhaps no other term in neuroscience more overused or misunderstood than 'neuroplasticity', as it is usually quoted as an excuse for going back to the old behaviourist 'blank slate' model of cognition and learning. Lumosity, and many others, were making outrageous claims about halting dementia and Alzheimer’s disease. Sixty-seven senior psychologists and neuroscientists blasted their claims and the Federal Trade Commission swung into action. The myth was busted.

Pavlovian gamification
I have argued for some time that the claims of gamification are exaggerated, and this study is the first I’ve seen that really puts this to the test, with a strong methodology, in a respected peer-reviewed journal. This is not to say that some aspects of gaming are not useful – for example, their motivational effect – just that much of what passes for gamification is Pavlovian nonsense, backed up with spurious claims. I do think that gamification can be useful, as there are DOs and DON'Ts, but that it is often counterproductive.

Conclusion
The problem here is that, in the case of Lumosity, tens of millions are being 'duped' into buying a subscription product t[...]
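The study's pre/post, three-condition design is easy to picture in code. Here is a minimal sketch using made-up scores (the numbers are illustrative, not the study's data): each condition is measured before and after training, and the 'gain' is simply post minus pre, compared across groups.

```python
# Sketch of a three-group pre/post design with hypothetical scores.
from statistics import mean

def mean_gain(pre, post):
    """Average per-participant improvement (post minus pre)."""
    return mean(b - a for a, b in zip(pre, post))

# Illustrative numbers only - not the Kable et al. data.
groups = {
    "gamified_quizzes": ([50, 55, 60], [54, 59, 64]),
    "computer_games":   ([48, 52, 61], [52, 56, 65]),
    "practice_only":    ([51, 54, 58], [55, 58, 62]),
}

gains = {name: mean_gain(pre, post) for name, (pre, post) in groups.items()}
# In this toy data, as in the study, every condition improves by about
# the same amount - the gamified group shows no special advantage.
```

The design's point is exactly this comparison: learning happened in all three conditions, so the interesting quantity is the difference between the gains, not the gains themselves.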

Fractious Guardian debate: Tech in schools – money saver or waster


7 reasons why ‘teacher research' is a really bad idea
The Guardian hosted an education debate last night. It was pretty fractious, with the panel split down the middle and the audience similarly split. On one side lay the professional lobby, who saw teachers as the only drivers of tech in schools, doing their own research and being the decision makers. On the other side were those who wanted a more professional approach to procurement, based on objective research and cost-effectiveness analysis. What I heard was what I often hear at these events: that teachers should be the researchers, experimenters, adopting an entrepreneurial method, making judgements and determining procurement. I challenged this – robustly. Don’t teachers have enough on their plate, without taking on several of these other professional roles? Do they have the time, never mind the skills, to play all of these roles? (Thanks to Brother UK for pic.)

1. Anecdote is not research
To be reasonably objective in research you need to define your hypothesis, design the trial, select your sample, have a control, isolate variables and be good at gathering and interpreting the data. Do teachers have the time and skills to do this properly? Some may, but the vast majority do not. It normally requires a post-graduate degree (not in teaching) and some real research practice before you become even half good at this. I wouldn’t expect my GP to mess around with untested drugs and treatments on anecdotal evidence based on the views of GPs. I want objective research by qualified medical researchers. En passant, let me give a famous example. Learning styles (VAK or VARK) were promulgated by Neil Fleming, a teacher, who based them on little more than armchair theorising. They are still believed by the majority of teachers, despite oodles of evidence to the contrary. This is what happens when bad teacher research spreads like a meme. It is believed because teachers rely on themselves and not on objective evidence.

2. Not in the job description
Being a ‘researcher’ is not in the job description. Teaching is hard; it needs energy, dedication and focus. By all means seek out the research and apply what is regarded as good practice, but the idea that good practice is whatever any individual deems it to be, through their personal research, is a conceit. A school is not a personal lab – it has a purpose.

3. Don’t experiment on other people’s children
There is also the ethical issue of experimenting on other people’s children. I, as a parent, resent the idea that teachers will experiment on my children. I assume they’re at school to learn, not to be the subjects of the teachers' ‘research’ projects in tech.

4. Category mistake
What qualifies a teacher to be a researcher? It’s like the word ‘leader’: when anyone can simply call themselves a leader, it renders the word meaningless. I have no problem with teachers seeking out good research, even making judgements about what they regard as useful and practical in their school, but that’s very different from calling yourself a ‘researcher' and doing ‘research’ yourself. That’s a whole different ball park. This is a classic category mistake, shifting the meaning of a word to suit an agenda.

5. Entrepreneurial
This word came up a lot. We need more start-up companies in schools. Now that’s my world. I’m an investor, run an EdTech start-up, and, believe me, that’s the last thing you need. Most start-ups fail and you don’t want failed projects crashing around in your school. But “it teaches the kids how to be entrepreneurs”, said one of the panel. No it doesn’t. Start-ups have agendas. Sure, they’ll want to get into your school, but don’t believe that this is about ‘research’; it’s about ‘referral’. Wait, look, assess, analyse, then try and procure.

6. Teaching tech bias
Technology is an integral part of a school. But it is a mistake to focus solely on ‘teaching tech’. There are three types of technology in schools: Sch[...]
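To make point 1 concrete, here is what the bare bones of a proper trial analysis look like, as a minimal, hypothetical sketch (all scores invented): two randomly assigned groups, a pre-defined outcome measure, and a Welch t-statistic comparing them. Real research would of course add power analysis, blinding, degrees of freedom, a p-value and ethics approval.

```python
# Minimal sketch: compare an intervention group with a control group
# using Welch's t-statistic (robust to unequal variances).
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

control      = [62, 58, 65, 60, 59, 63]   # taught as usual (hypothetical)
intervention = [66, 61, 70, 64, 63, 68]   # taught with the new tool (hypothetical)

t = welch_t(intervention, control)
# A large |t| suggests a real difference; anecdote gives you no such number.
```

The contrast with anecdote is the point: without a control group and a defined outcome, there is nothing to compute, and 'it worked in my classroom' is all that remains.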

10 recommendations on HE from OEB Mid-Summit (Reykjavik)


Iceland was a canny choice for a Summit. Literally in sight of the house where Reagan and Gorbachev met in 1986 (the Berlin Wall fell in 1989), it was a deep, at times edgy, dive into the future of education. When people get together, face up to rifts in opinion and talk it through – as the Reagan-Gorbachev summit showed – things happen. Well, maybe. Here are my ten takeaways from the event (personal).

1. Haves-have nots
First the place. Iceland has emerged up through the Mid-Atlantic Ridge, which still runs right through the middle. Sure enough, while here, there were political rifts in the US, with the Comey-Trump farrago, and a divisive election in the UK. It is clear that economic policies have caused fractures between the haves and the have-nots. In the UK there’s a hung Parliament, the country is split, Brexit negotiations loom and there is a crisis in Northern Ireland. In the US, Trump rode into Washington on a wave of disaffection and is causing chaos. But let’s not imagine that Higher Education lies above all of this. Similar fault lines emerged at this Summit. As Peter Thiel said, Higher Education is like the Catholic Church on the eve of the Reformation: “a priestly class of professors… people buying indulgences in the form of amassing enormous debt for the sort of the secular salvation that a diploma represents”. More significantly, he claims there has been a ‘failure of the imagination, a failure to consider alternative futures’. Culture continues to trump strategy. Higher Education is a valuable feature of cultural life, but people are having doubts. Has it become obese? Why have costs ballooned while delivering the same experience? There are problems around costs, quality of teaching and relevance. Indeed, could Higher Education be generating social inequalities? In the US and UK there was a perception, not without truth, that there is a gulf between an urban, economically stable, educated elite and the rest, who have been left to drift into low-status jobs and a loss of hope for their children. The Federal debt held on student loans in the US has topped 1.5 trillion dollars. In the UK, institutions simply raise fees to whatever cap they can. The building goes on, costs escalate and student loans get bigger. Unlike almost every other area of human endeavour, it seems there has been little effort to reduce costs and look for cost-effective solutions.
Recommendation: HE must lower its costs and scale.

2. Developed-developing
The idea that the current Higher Education model should be applied to the developing world is odd, as it doesn’t seem to work that well in the developed world. Rising costs, student and/or government debt, dated pedagogy and an imbalance between the academic and the vocational render application in the developing world at best difficult, at worst dangerous. I have been involved in this debate and it is clear that the developing world needs vocational first, academic second.
Recommendation: Develop a different, digital HE model for the developing world.

3. Public-private
In an odd session by Audrey Watters, we had a rehash of one of her blogs, about personalized learning being part of the recent rise in ‘populism’. She blamed ‘capitalism’ for everything, seeing ‘ideology’ everywhere. But as one brave participant shouted behind me, “so your position is free from ideology then?” It was the most disturbing session I heard, as it confirmed my view that the liberal elite are somewhat out of touch with reality and all too ready to trot out old leftist tropes about capitalism and ideology, without any real solutions. The one question, from the excellent Valerie Hannon, stated quite simply that she was “throwing the baby out with the bath water”. Underlying much of the debate at the summit lay an inconvenient truth: Higher Ed has a widespread and deep anti-corporate culture. This means that the public and private sectors talk past, and not to, each other. This is a real problem in EdTech. Until [...]

Philosophy of technology - Plato, Aristotle, Nietzsche, Heidegger - technology is not a black box


Greek dystopia
The Greeks understood, profoundly, the philosophy of technology. In Aeschylus’s Prometheus Bound, when Zeus hands Prometheus the power of metallurgy, writing and mathematics, Prometheus gifts it to man, so Zeus punishes him with eternal torture. This warning is the first dystopian view of technology in Western culture. Mary Shelley subtitled Frankenstein ‘The Modern Prometheus’, and Hollywood has delivered for nearly a century on that dystopian vision. Art has largely been wary and critical of technology.

God as maker
But there is another, more considered, view of technology in ancient Greece. Plato articulated a philosophy of technology, seeing the world, in his Timaeus, as the work of an ‘Artisan’; in other words, the universe is a created entity, a technology. Aristotle makes the brilliant observation in his Physics that technology not only mimics nature but continues “what nature cannot bring to a finish”. They set in train an idea that the universe was made and that there was a maker: the universe as a technological creation.

The following two thousand years of Western culture bought into the myth of the universe as a piece of created technology. Paley, who formulated the modern argument for the existence of God from design, used technological imagery, the watch, to specify and prove the existence of a designed universe and therefore a designer – we call (him) God. In Natural Theology; or, Evidences of the Existence and Attributes of the Deity, he uses an argument from analogy, comparing the workings of a watch with the observed movements of the planets in the solar system, to conclude that the universe shows signs of design and that there must be a designer. Dawkins titled his book The Blind Watchmaker as its counterpoint. God as watchmaker, as technologist, has been the dominant, popular, philosophical belief for two millennia. Technology, in this sense, helped generate this metaphysical deity. It is this binary separation of the subject from the object that allows us to create new realms, heaven and earth, which get a moral patina and become good and evil, heaven and hell. The machinations of the pastoral heaven and the fiery foundry that is hell revealed the dystopian vision of the Greeks.

Technology is the manifestation of human conceptualisation and action, as it creates objects that enhance human powers, first physical then psychological. With the first hand-held axes, we turned natural materials to our own ends. With such tools we could hunt, expand and thrive, then control the energy from felled trees to create metals and forge even more powerful tools. Tools beget tools.

Monotheism rose on the back of cultures in the Fertile Crescent of the Middle East, who literally lived on the fruits of their tool-aided labour. The spade, the plough and the scythe gave them time to reflect. Interestingly, our first records, on that beautifully permanent piece of technology, the clay tablet, are largely accounts of agricultural produce. The rise of writing and efficient alphabets made writing the technology of control. We are at heart accountants, holding everything to account, even our sins. The great religious books of accounts were the first global best sellers.

Technology slew God
Technology may have suggested, then created, God, but in the end it slew him. With Copernicus, who drew upon technology-generated data, we found ourselves at some distance from the centre of the Universe, not even at the centre of our own little whirl of planets. Darwin then destroyed the last conceit, that we were unique and created in the eyes of a God. We were the product of the blind watchmaker, a mechanical, double-helix process, not a maker; reduced to mere accidents of genetic generation, the sons not of Gods but of genetic mistakes. Anchors lost, we were adrift, but we humans are a cunning species. We not only make things up, we make things and make things happen. We are [...]

10 uses for Amazon Echo in corporates


OK, she’s been in my kitchen for months and I’m in the habit of asking her to give me Radio 4 while I’m making my morning coffee. Useful for music as well, especially when a tune comes into your head. But it’s usually some question I have in my head, or a topic I want some detail on. My wife’s getting used to hearing me talk to someone else while in another room. But what about the more formal use of Alexa in a business? Could its frictionless, hands-free, natural language interface be of use in the office environment?

1. Timer
How often have you been in a meeting that’s overrun? You can set multiple timers on Alexa and she will light up and alarm you (softly) towards the end of each agenda item, say one minute before the next agenda item. It could also be useful as a timer for speakers and presenters. Ten minutes each? Set her up and she provides both visual and aural timed cues. I guess it would pay for itself at the end of the first meeting!

2. Calendar functionality
As Alexa can be integrated with your Google calendar, you simply say, “Alexa, tell Quick Events to add go to see Tuesday 4th March at 11 a.m.". It prompts you until it has the complete scheduled entry.

3. To-do lists
Alexa will add things to a to-do list. This could be an action list from a meeting or a personal list.

4. Calculator
Need numbers added, subtracted, multiplied, divided? You can read them in quickly and Alexa replies quickly.

5. Queries and questions
Quick questions or more detailed stuff from Wikipedia? Alexa will oblige. You can also get stock quotes and even do banking through Capital One. Expect others to follow.

6. Domain-specific knowledge
Product knowledge, company-specific knowledge: Alexa can be trained to respond to voice queries. Deliver a large range of text files and Alexa can find the relevant one on request.

7. Training
You can provide text (text to speech) or your own audio briefings. Indeed, you can have as many of these as you want. Or go one step further with a quiz app that delivers audio training.

8. Music
Set yourself up for the day or have some ambient music on while you work? Better still, music that responds to your mood and requests: Alexa is your DJ on demand.

9. Order sandwiches, pizza or Uber
As Alexa is connected to several suppliers, you can get these delivered to your business door. Saves all of that running out for lunchtime sandwiches or pizza.

10. Control the office environment
You can control your office environment through the Smart Home Skill API. This will work with existing smart home devices, but there’s a developer’s kit so that you can develop your own. It can control lights, thermostats, security systems and so on.

Conclusion
As natural language AI applications progress, we will see these business uses become more responsive and sophisticated. This is likely to eat into that huge portion of management that the Harvard Business Review identified as admin. Beyond this are applications that deliver services, knowledge and training specific to your organisation and to you as an individual. We're working on this as an application in training as we speak.[...]
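Several of these uses (the quiz app, domain-specific Q&A, the meeting timer) would be built as custom Alexa skills: a backend handler receives a JSON request and returns a speech response. Here is a rough sketch of that shape; the envelope follows the standard Alexa custom-skill request/response format, but the intent name `MeetingTimerIntent` and slot `minutes` are invented for illustration, not Amazon-defined.

```python
# Hedged sketch of an Alexa custom-skill backend (e.g. an AWS Lambda).
# Intent and slot names here are hypothetical.

def build_response(speech, end_session=True):
    """Wrap plain text in the Alexa skill response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context=None):
    """Route an incoming Alexa request to a spoken reply."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # Opening the skill: keep the session alive and prompt the user.
        return build_response("Meeting timer ready. How many minutes?", end_session=False)
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "MeetingTimerIntent":
        minutes = request["intent"]["slots"]["minutes"]["value"]
        return build_response(f"Timer set for {minutes} minutes.")
    return build_response("Sorry, I didn't catch that.")
```

A real skill would also register its interaction model (intents, slots, sample utterances) in the Alexa developer console and handle session state, but the core loop is just this: JSON in, speech out.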

AI moving towards the invisible interface


AI is the new UI
What do the most popular online applications all have in common? They all use AI-driven interfaces. AI is the new UI. Google, Facebook, Twitter, Snapchat, email, Amazon, Google Maps, Google Translate, satnav, Alexa, Siri, Cortana and Netflix all use sophisticated AI to personalise in terms of filtering, relevance, convenience, and time and place sensitivity. They work because they tailor themselves to your needs. Few notice the invisible hand that makes them work, that makes them more appealing. In fact, they work because they are invisible. It is not the user interface that matters, it is the user experience.

Yet in online learning, AI UIs are rarely used. That’s a puzzle, as it is the one area of human endeavour that has the most to gain. As Black & Wiliam showed, feedback that is relevant, clear and precise goes a long way in learning. Not so much a silver bullet as a series of well-targeted rifle shots that keep the learner moving forward. When learning is sensitive to the learner’s needs in terms of pace, relevance and convenience, things progress. Learning demands attention, and because our working memory is the narrow funnel through which we acquire knowledge and skills, the more frictionless the interface, the greater the speed and efficacy of learning. Why load the learner with the extra tasks of learning an interface, navigation and extraneous noise? We’ve seen steady progress beyond the QWERTY keyboard, designed to slow typing down to avoid mechanical jams, towards mice and touch screens. But it is with the leap into AI that interfaces are becoming truly invisible.

Textless
Voice was the first breakthrough, and voice recognition is only now reaching the level of reliability that allows it to be used in consumer computers, smartphones and devices in the home, like Amazon Echo and Google Home. We don’t have to learn how to speak and listen; those are skills we picked up effortlessly as young children. In a sense, we didn’t have to learn how to do these things at all, they came naturally. As bots develop the ability to engage in dialogue, they will be ever more useful in teaching and learning. AI also provides typing, fingerprint and face recognition. These can be used for personal identification, even assessment. Face recognition for ID, as well as thought diagnosis, is also advancing, as is eye-movement and physical-gesture recognition. Such techniques are commonly used in online services such as Google, Facebook, Snapchat and so on. But there are bigger prizes in the invisible interface game. So let's take a leap of the imagination and see where this may lead over the next few decades.

Frictionless interfaces
Mark Zuckerberg announced this year that he wants to get into mind interfaces, where you control computers and write straight from thought. This is an attempt to move beyond smartphones. The advantages are obvious, in that you think fast, type slow. There’s already someone with a pea-sized implant who can type eight words a minute. Optical imaging (lasers) that reads the brain is one possibility. There is an obvious problem here around privacy, but Facebook claim to be focussing only on words chosen by the brain for speech, i.e. things you were going to say anyway. This capability could also be used to control augmented and virtual reality, as well as comms to the internet of things. Underlying all of this is AI.

In Sex, Lies and Brain Scans, by Sahakian and Gottwald, the advances in this area sound astonishing. John-Dylan Haynes (Max Planck Institute) can already predict intentions in the mind, with scans, to see whether the brain is about to add or subtract two numbers, or press a right or left button. Words can also be read, with Tom Mitchell (Carnegie Mellon) able to spot, from fMRI scans, nouns from a list of 60, 7 times out of 10. They moved on to train the model to predict words out[...]

Scrap your company values and replace with 'Don't be a dick!'... the rest is hubris


A brief conversation with a young woman, in the queue for lunch at a corporate ‘values’ day, opened my eyes up to the whole values thing in organisations. “I have my values,” she said, “and they’re not going to be changed by a HR department.... I’ll be leaving in a couple of years and no doubt their HR will have a different set of values… which I’ll also ignore”. Wisest thing I heard all day.You’ve probably had the ‘values’ treatment. Suddenly, parachuted out of HR, comes a few abstract nouns, or worse, an acronym, stating that the organisation now has some really important ‘values’. Even worse, an expensive external agency may have juiced them up. I genuinely like organisations that have a strategy, purpose, even a mission. But the current obsession with organisational values I don't buy.I also chaired a Skills Summit last month, where innumerable HR folk paraded their company values with the usual earnestness. An endless stream of abstract nouns, all of which seemed like things any normal human being would want in any context, in or out of work - you know the words - integrity, innovation, honesty, community....  After a full day of this stuff I was impressed by the guy who ran a small, successful software company, who stood at the podium, and claimed that his company didn't really have any stated values but felt that the whole 'values' thing could be replaced by one phrase 'Don't be a dick!". All company values can be substituted by this one phrase. The rest is hubris....   Bullshit BingoHaving dealt with hundreds of large organisations for more than 30 years, I have yet to find one whose values were anything more than platitudes. They are invariably a crude mixture of reactive PR, HR overreach and the crude selection from a list of abstract nouns, sometimes into an idiotic acronym. In reality - even when masked by complex consultancy reports and training - it's almost always bullshit Bingo.Why would we imagine that HR have any skills in this area? 
In what sense are they 'experts' in values? For me, it is a utopian view of work and organisations. I can remember the days when organisational 'value' lists didn't exist. People were more honest and realistic about expectations. They came in when HR suddenly decided it had to look after our emotional and moral welfare - always a rather ridiculous idea.

The banks were full of this 'values' culture. I worked with most of them. It was all puff and PR. People do not buy into this stuff. They can barely recall what the values are. I have values and I'm not interested in what HR, or some external consultant, says my values should be. The even more ridiculous idea that people who don't adopt those values should be forced out is wrong, and arguably illegal.

The problem here was a shift when HR started to become the people who protect the company against its own employees - that, for example, is what compliance training is largely about - ticking boxes in case of insurance claims and fines. They dress this up in ‘values’ documents but few remember them and even fewer care....

The really interesting thing about 'values', in my experience, is that the companies that felt most compelled to get them identified - banks, accountancies, consultancies, tech companies, pharma companies etc. - were the very companies where they were most ignored. In fact, they were counterproductive, as the employees all knew they were a scam, designed to 'police' them. Apply this authenticity test to your company values. Sniff out the hubris and bullshit.

Test 1: Bad acronyms - values created to fit the word

If your values set is an acronym, it’s likely to be inauthentic. The net result of fuzzy HR thinking is so often the ‘bad acronym’. Chances are that someone has shoehorned some abstract nouns into a word that sou[...]

Snapchat’s smart pivot into an AR company but is AR ready for learning?


Augmentation, in ‘augmented’ reality, comes in all shapes, layers and forms, from bulky headsets and glasses to smartphones. At present the market is characterised by a mix of expensive solutions (Hololens), failures (Google Glass, Snap Spectacles) and spectacular successes (Pokemon Go, Snapchat filters). So where is all of this going?

Snapchat

Snapchat has pivoted, cleverly, into being not just another messenger service, but the world’s largest augmented reality company. Its ‘filters’, which change every day, use face recognition (AI) and layered graphics to deliver some fun stuff and, more importantly, advertising. It is a clever ploy, as it plays to the personal. You can use fun filters, create your own filter with a piece of dynamic art or buy one. It’s here that they’re building an advertising and corporate business on designed filters around events and products. That’s smart and explains why their valuation is stratospheric. Once you play around with Snapchat, you get why it’s such a big deal. As usual, it’s simple, useful, personal and compelling. With over 150 million users and an advertising revenue model that works on straight ads, sponsored filters and sponsored lenses (interactive filters), it has tapped into a market that simply never existed.

Snap Spectacles

Snap Spectacles was their interesting foray into the augmented glasses market – but more of a gimmick than a realistic consumer product. Targeted only at Snapchat users, you can’t really wear them with regular glasses, and all they do is record video – but, to be fair, they do that well. However, as with Google Glass, you feel like a bit of a twat. Not really a big-impact product.

Hololens

With its AI-driven interfaces – head pointing, gesture and voice recognition – it is neat, but at $3000 a pop not really a commercial proposition for Microsoft. As for the ‘experience’, the limited rectangle that is the field of view is disappointing, and ‘killer’ applications absent.
There have been games, Skype applications, 2D and 3D visualisations, but nothing yet that really blows the mind – forget the idea of sci-fi holograms, it’s still all a bit ‘Pepper’s Ghost’ in feel, still tethered, and has a long way to go before being a viable product.

Leap

Bit of a mystery still, as they are a secretive lot. Despite having raised more than $1.4 billion from Google, Alibaba and Andreessen Horowitz, it has still to deliver whatever it is that they want to deliver. Mired in technical problems, they may still pull something out of the bag with their glasses release this year – but it seems you have to wear a belt with some kit attached. Watch this space, as they say, as it is nothing but an empty space for now.

Pokemon Go

We saw the way this market was really going with Pokemon Go: layers of reality on a smartphone. Photographic camera layer, idealised graphic map layer, graphic Pokemon layer, graphic Pokestops layer, GPS layer, internet layer – all layered on to one screen into a compelling social and games experience. Your mind simply brings them all together into one conscious, beautiful, blended reality – more importantly, it was fun. This may be where augmented reality will remain until the miniaturisation of headsets and glasses gets down to the level of contact lenses.

AR v VR

I still prefer the full-punch VR immersive experience. AR, in its current form, is a halfway-house experience. The headset and glasses seem like a stretch for all sorts of reasons. You simply have to ask yourself: do I need all of this cost and equipment to see a solar system float in space, when I can see it in 3D on my computer screen? There are clearly many situations in which one would want to ‘layer’ on to reality but in many learning situations, there may be simple[...]

AI fail that will make you gag with disgust…


Among the first consumer robots were vacuum cleaners that bumped around your floors sucking up dirt and dust. The first models simply sensed when they hit something, a wall or a piece of furniture, turned the wheels and headed off in a different direction. The latest vacuum robots actually map out your room and mathematically calculate the optimum route to evenly vacuum the floor. They have a memory, build a mathematical model of the room with laser mapping and 360-degree cameras, can detect objects in real time and have clever corner-cleaning capability. They move room to room and can be operated from a mobile app - scheduling and so on. They will even automatically recharge when their batteries get low and resume from the point they left. Very impressive.

That’s not to say they’re perfect. Take this example, which happened to a friend of mine. He has a pet dog and, sure enough, the vacuum cleaner would bump into the dog on the carpet, turn and move on. The dog was initially puzzled, sniffed it a bit, but learned to ignore this new pet as something beneath his contempt as top dog. Cats even like to sit on them and take rides around the house.

Then, one day, the owner came back, opened his front door and was hit by a horrific wall of smell. The dog had taken a dump and the robot cleaner had smeared the shit evenly across the entire carpet, even into the corners, room by room, with a mathematical exactitude superior to that of any human cleaner. The smell was overwhelming and the clean-up a Herculean task on hands and knees, accompanied by regular gagging.

The lesson here is that AI is smart and can replace humans in all sorts of tasks, but doesn’t have the checks and balances of normal human intelligence. In fact the attribution of the word intelligence, I'd argue (and have here), is an anthropomorphic category error, taking one category and applying it in a separate and completely different domain.
It’s good at one thing, or a few things, such as moving, mapping and sucking, but it doesn’t know when the shit hits the fan.[...]
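The route-planning half of that story is genuinely simple maths. A minimal sketch (an illustrative toy, not any vendor's actual algorithm) of the serpentine 'boustrophedon' sweep such robots use to cover a floor evenly:

```python
def coverage_path(rows, cols):
    """Visit every cell of a rows x cols grid in a serpentine (boustrophedon)
    sweep - the even-coverage pattern a mapping robot vacuum aims for."""
    path = []
    for r in range(rows):
        # alternate left-to-right and right-to-left on successive rows
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cols_order:
            path.append((r, c))
    return path

path = coverage_path(3, 4)
# Every cell visited exactly once - which is precisely the problem
# when what is being spread evenly isn't cleanliness.
assert len(set(path)) == 3 * 4
```

The path is optimal for coverage and blind to content: the code has no notion of what it is sweeping over, which is the whole point of the story.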

Do bears shit in the woods? 7 reasons why data analytics is a misleading, myopic use of AI in HE


I’m increasingly convinced that HE is being pulled in the wrong direction by its obsession with data analytics, at the expense of more fruitful uses of AI in learning. Sure, it has some efficacy, but the money being spent at present may be mostly wasted.

1. Bears in woods

Much of what is being paid for here is what I’d call answers to the question, ‘Do bears shit in the woods?’ What insights are being uncovered? That drop-out is caused by poor teaching and poor student support? That students with English as a second language struggle? Ask yourself whether these insights really are insights, or whether they’re something everyone knew in the first place.

2. You call that data?

The problem here is the paucity of data. Most universities don’t even know how many students attend lectures (few record attendance), as they’re scared of the results. I can tell you that the actual data, when collected, paints a picture of catastrophic absence. That’s the first problem – poor data. Other data sources are similarly flawed, as there's little in the way of fine-grained feedback. It's small data sets, often messy, poorly structured and not understood.

3. Easier ways

Much of this so-called use of AI is like going over the top of your head with your right hand to scratch your left ear. Complex algorithmic approaches are likely to be more expensive and far less reliable and verifiable than simple measures, like using a spreadsheet or making what little data you have available to faculty in a digestible form.

4. Better uses of resources

The problem with spending all of your money on diagnosis, especially when the diagnosis is an obvious, limited set of possible causes that were probably already known, is that the money is usually better spent on treatment. Look at improving student support, teaching and learning, not dodgy diagnosis.

5. Action not analytics

In practice, when those amazing insights come through, what do institutions actually do?
Do they record lectures, because students with English as a foreign language find some lecturers difficult and the psychology of learning screams at us to let students have repeated access to resources? Do they tackle the issue of poor teaching by specific lecturers? Do they question the use of lectures, when easily the most important intervention, as the research shows, is the shift to active learning? Do they cut response times on feedback to students? Do they drop the essay as a lazy and monolithic form of assessment? Or do they waffle on about improving the ‘student experience’, where nothing much changes?

6. Evaluation

I see a lot of presentations about why one should do data analytics - mostly around preventing drop-out. I don’t see much in the way of verifiable analysis that data analytics has been the actual causal factor in preventing future drop-out. I mean a cost-effectiveness analysis. This is not easy, but it would convince me.

7. Myopic view of AI

AI is many things, and a far better use of AI in HE, in my opinion, is to improve teaching through personalised, adaptive learning, better feedback, student support, active learning, content creation and assessment. All of these are available right now. They address the REAL problem – teaching and learning.

Conclusion

To be fair, I applaud efforts from the likes of JISC to offer a data locker, so that institutions can store, share and use bigger data sets. This solves some legal problems and goes some way to addressing the issue of small data. But it is, as yet, a wholly unproven approach. I work in AI in learning, have an AI learning company, invest in AI EdTech companies, am on the board of an AI learning company, speak on the subject all over the world, write constantly on the[...]
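To see how little machinery the 'simple measures' in point 3 actually need: here is a hypothetical sketch, with invented attendance numbers, that flags at-risk students in a few lines - no algorithms, no analytics platform, just the arithmetic a spreadsheet would do:

```python
# Hypothetical data for illustration: lectures attended out of 20 this term.
attendance = {"Ana": 18, "Ben": 5, "Cho": 12, "Dev": 3}
TOTAL_LECTURES = 20
THRESHOLD = 0.5  # flag anyone below 50% attendance

at_risk = sorted(name for name, n in attendance.items()
                 if n / TOTAL_LECTURES < THRESHOLD)
print(at_risk)  # ['Ben', 'Dev']
```

If this is all the 'insight' amounts to, the argument goes, the money is better spent on the intervention than on the diagnosis.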

AI is the new UI: 7 ways AI shapes your online experience


HAL stands for ‘Heuristically programmed ALgorithmic computer’. Turns out that HAL has become a reality. Indeed, we deal with thousands of useful HALs every time we go online. Whenever you are online, you are using AI. As the online revolution has accelerated, the often invisible application of AI and algorithms has crept into a vast range of our online activities. A brief history of algorithms takes in the Sumerians, Euclid, the origins of the term (al-Khwarizmi), Fibonacci, Leibniz, Gauss, Laplace, Boole and Bayes, but in the 21st century ubiquitous computing and the internet have taken algorithms into the homes and minds of everyone who uses the web.

You’re reading this from a network, using software, on a device, all of which rely fundamentally on algorithms and AI. The vast portion of the software iceberg that lies beneath the surface, doing its clever but invisible thing – the real building blocks of contemporary computing – is algorithms and AI. Whenever you search, get online recommendations, engage with social media, buy, do online banking or online dating, or see online ads, algorithms are doing their devilishly clever work.

BCG’s ten most innovative companies 2016

Boston Consulting Group publishes this list every year: Apple, Google, Tesla, Microsoft, Amazon, Netflix, Samsung, Toyota, Facebook, IBM. Note how it is dominated by companies that deliver access and services online. Note that all, apart perhaps from Toyota, are turning themselves into AI companies. Some, such as IBM, Google and Microsoft, have been explicit about this strategy. Others, such as Apple, Samsung, Netflix and Facebook, have been acquiring skills and have huge research resources in AI. Note also that Tesla, albeit a car company, is really an AI company. Their cars are always-on, learning robots. We are seeing a shift in technology towards ubiquitous AI.

1. Search

We have all been immersed in AI since we first started using Google. Google is AI.
Google exemplifies the success of AI, having created one of the most successful companies ever on the back of it. Beyond simple search, it also enables more specific AI-driven search through Google Scholar, Google Maps and other services. Whether it is documents, videos, images, audio or maps, search has become the ubiquitous mode of access. AI is the real enabler when it comes to access. Search engine indexing finds needles in the world’s biggest haystack. Search for something on the web and you’re ‘indexing’ billions of documents and images. Not a trivial task, and it needs smart algorithms to do it at all, never mind in a tiny fraction of a second. PageRank was the technology that made Google one of the biggest companies in the world. Google has moved on; nevertheless, the multiple algorithms that rank results when you search are very smart. We all have, at our fingertips, the ability to research and find the things that only a tiny elite had access to just 20 years ago.

2. Recommendations

Amazon has built the world’s largest retail company with a raw focus on the user experience, presented by its recommendation engine. Its AI platform, Alexa, now delivers a range of services, but Amazon was made famous by its recommendations, first on books, then other goods. Recommendation engines are now everywhere on the web. You are more often than not presented with choices that are pre-selected, rather than the result of a search. Netflix is a good example, where the tiling is tailored to your needs. Most social media feeds are now AI-driven, as are many online services, where what you (and others) do determines what you see.

3. Communications

Siri, VIV, Cortana, Alexa… voice recognition, enabled by advances in AI through Natural Language Processing, has changed the way we communicate with technolog[...]
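The PageRank idea mentioned under 'Search' is surprisingly compact: a page is important if important pages link to it. A toy power-iteration sketch (illustrative only - not Google's production system, and the function names are my own):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page keeps a small 'random surfer' baseline...
        new_rank = {p: (1 - damping) / n for p in pages}
        # ...plus a damped share of the rank of every page linking to it
        for page, outlinks in links.items():
            if outlinks:
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += damping * share
        rank = new_rank
    return rank

# Two pages link to A; A links out to both - so A should rank highest.
ranks = pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
```

Real web-scale ranking layers hundreds of signals on top, but the core insight - rank flowing along links until it settles - fits in twenty lines.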

7 myths about AI


Big Blue beat Kasparov in 1997, but chess is thriving. AI remains our servant, not master. Yet mention AI and people jump to the far end of the conceptual spectrum with big-picture, dystopian and usually exaggerated visions of humanoid robots, the singularity and the existential threat to our species. This is fuelled by cultural messages going back to the Greek Prometheus myth, propelled by Mary Shelley’s Frankenstein (subtitled The Modern Prometheus) through nearly a century of movies, from Metropolis onwards, that portray created intelligence as a threat. The truth is more prosaic.

Work with people in AI and you’ll quickly be brought back from sci-fi visions to more practical matters. Most practical, and even theoretical, AI is what is called ‘weak’ or ‘narrow’ AI. It has no ‘cognitive’ ability. I blame IBM and the consultancies for this hyperbole. There is no ‘consciousness’. IBM's Watson may have beaten the Jeopardy champions, Google’s AlphaGo may have beaten the Go champion – but neither knew they had won.

The danger is that people over-promise and under-deliver, so that there's disappointment in the market. We need to keep a level head here and not see AI as the solution to everything. In fact, many problems need far simpler solutions.

AI is the new UI

AI is everywhere. You use it every day when you use Google, Amazon, social media, online dating, Netflix, music streaming services, your mobile and any file you create, store or open. Our online experiences are largely of AI-driven services. It's just not that visible. AI is the new UI. However, there are several things we need to know about AI if we are to understand and use it well in our specific domain, in this case teaching and learning. Seven myths:

1. AI is ‘intelligent’
2. AI is all about the brain
3. AI is conscious
4. AI is strong
5. AI is general
6. AI is one thing
7. AI doesn’t affect me

1. AI is not ‘intelligent’

I have argued that the word ‘intelligent’ is misleading in AI.
It pulls us towards a too anthropomorphic view of AI, suggesting that it is ‘intelligent’ in the sense of human intelligence. This is a mistake, as the word ‘intelligence’ is misleading. It is better to see AI in terms of general tasks and competences, not as being intrinsically intelligent, as that word is loaded with human preconceptions.

2. AI is not about the brain

AI is coded and, as such, most of it does not reflect what happens in the human brain. Even the so-called ‘neural network’ approach is only loosely modelled on the networked structure of the brain. It is more analogy than replication. It’s a well-worn argument, but we did not learn to fly by copying the flapping of birds’ wings and we didn’t learn to go faster by copying the legs of a cheetah – we invented the wheel. Similarly with AI.

3. AI is not cognitive

IBM's marketing of AI as ‘cognitive technology’ is way off. Fine if they mean it can perform or mimic certain cognitive tasks, but they go further, suggesting that it is in many senses ‘cognitive’. This is quite simply wrong. It has no consciousness, no real general problem-solving abilities, none of the many cognitive qualities of human minds. It is maths. This is not necessarily a bad thing, as it is free from forgetting, racism, sexism and cognitive biases, doesn't need to sleep, networks and doesn't die. In other words, AI is about doing things better than brains, but by other means.

4. AI is weak

There is little to fear from AI threatening independence and autonomy in the short term. Almost all AI is what is called ‘weak’ AI: programmes, run on computers, that simulate what hum[...]

Elon Musk – the Bowie of Business


Having just finished Morley’s brilliant biography of Bowie, it struck me that Musk is the Bowie of business. Constantly reinventing himself: PayPal hero, Tesla road warrior, Solar City sun god, Starman with SpaceX and now the sci-fi Hyperloop hipster – and he’s still only in his forties. Strange fact: the first Tesla car was codenamed DarkStar.

But let’s not stretch Bowie's leg warmers too far. Ashlee Vance’s biography of Elon Musk is magnificent for mostly other reasons. It’s about Musk the man, his psychology. There’s a manic intensity to Musk, but it’s directed, purposeful and, as Vance says, it’s not about making money. Time and time again he puts everything he’s made into the next, even weirder and riskier project. Neither is he a classic business guy or entrepreneur. For him questions come first and everything he does is about finding answers. He despises the waste of intellect that gets sucked into law and finance, as he’s a child of the Enlightenment and sees as his destiny the need to accelerate progress. He doesn’t want to oil the wheels; he wants to drive, foot to the metal, the fastest electric car ever made, then ride a rocket all the way to Mars. As he says, he wants to die there – just not on impact. Always on the edge of chaos, like a kite that does its best work when it stalls and falls, but then soars.

Time and time again, experience tells me, and I read, about actual leaders who bear no resemblance to the utopian model presented by the bandwagon ‘Leadership’ industry. The one exception is Stanford’s Pfeffer, who also sees the leadership industry as peddling unreal, utopian platitudes. Musk has a string of business successes behind him, including PayPal, and is the major shareholder in three massive public companies, all of which are innovative, successful and global. He has taken on the aerospace, car and energy industries at breathtaking speed, with mind-blowing innovation.
Yet he is known to be mercurial, cantankerous, eccentric, mean, capricious, demanding and blunt; he delivers vicious barbs, swears like a trooper, takes things personally, lacks loyalty and has what Vance calls a ‘cruel stoicism’ – all of these terms taken from the book. He demands long hours and devotion to the cause, and is cavalier in firing people. “Working at Tesla was like being Kurtz in Apocalypse Now”. So, for those acolytes of ‘Leadership’, and all the bullshit that goes with that domain, he breaks every damn rule – then again, so do most of them – in fact, that’s exactly why they succeed. They’re up against woozies who believe all that shit about leading from behind.

So why are people loyal to him, and why does he attract the best talent in the field? Well, he has vision. He also has a deep knowledge of technology, is obsessive about detail, takes rapid decisions, doesn’t like burdensome reports and bureaucracy, likes shortcuts and is a fanatic when it comes to keeping costs down. Two small asides – he likes people to turn up at the same time in the morning and hates acronyms. I like this. His employees are not playing pool or darts mid-morning, and don’t lie around being mindful on brightly coloured bean bags. It’s relentless brainwork to solve problems against insane deadlines.

You may disagree, but he does think that only technology will deliver us from climate change and the dependence on oil, and allow us to inhabit planets other than our own; his businesses form a nexus of energy production, storage and utilisation that, he thinks, will save our species. He may be right. [...]

Corbyn is right - proud that my fat cat pay campaign against the CIPD paid off


Corbyn has come under attack for his wibbly-wobbly performance on top pay. Oh, how some liberals I know, who pretend to care about the poor, mocked him on social media. But for me, he was on the money. To my mind, this needs urgent attention: first, use the lever of public funding and contracts; second, use ratios (around 20:1, with checks on range), along with caps.

The blog is mightier than the sword

Not a lot of people know this (to be said in a Michael Caine accent), but in 2010 I was single-handedly responsible for slashing fat cat pay in a major institution, through blogging. It was the CIPD. I read the accounts and pointed out that the CEO salary was just short of £500k. Not bad for an organisation whose commercial revenue had plummeted (down 23%), research contracted and ridiculed (down 57%, and a report pulled), magazine imploded (down 83%), investment returns bombed (down 74.7%), and a membership angry and alienated by a command-and-control culture that left them with fewer services and starved of cash. There was also the issue of an odd and overpriced acquisition (Bridges Communications) and a blatant falsehood on her CV on the CIPD website. It seemed outrageous that the leading organisation in 'personnel', one that often opines on executive pay, should be taking such liberties. You can read this in detail here.

This caused a shitstorm. The CIPD Chair chipped in to defend the claim, but that simply exposed the fact that the fat cat stuff had been going on for years, when it was shown that the previous CEO, Geoff Armstrong, had also earned a cool half a million. You can read the ludicrous defence here.

What happened next was comical. Personnel Today picked up on my Jackie Orme story, laid out the case with a link to my blog along with an official response from the CIPD, and got a survey going (see story here). Does the CIPD CEO deserve an £87,000 bonus? Result: NO 94%, YES 6%! Pretty conclusive, and things changed very quickly.
To cut a long story short, the current CEO of the CIPD, Peter Cheese, earned £250k last year. Result. This is one of the reasons I never joined the CIPD and never will. I have a healthy distrust of membership organisations, which usually end up serving not their members but the staff of the institution itself. Needless to say, from that day the CIPD has never invited me to any event or conference, or involved me in any project. It was worth it. Call them out - it can work, as they hate the publicity. The moral of this story is to use the power of the pen to attack these people personally, along with the remuneration committees that support these extortionate salaries, bonuses and other benefits. Believe me, it is extortion. I’ve been on these boards. It is literally extortion from the public purse.

Universities

The first target should be the universities. The pay at the top has sprinted ahead of the pack. Last year the Russell Group got an average 6% pay rise, taking their annual package to an average of £366,000. All of this on the back of the widespread and indefensible exploitation of part-time and low-paid teaching staff. This is a disgrace. There’s also the issue of minimum wages right at the bottom. Don’t imagine for one minute that academe is in any sense a beacon of equality or morals. They’re rapacious at the top. Given that they receive huge amounts of public money, a large chunk through student fees - in effect government-backed loans, which they are not responsible for collecting - we have an easy lever here. Get those ratios working or your funding gets questioned.

Charities

I’ve also been a Trustee on some very big educational charities. They pull every trick i[...]

The future of parenthood through AI – meet Aristotle, Mattel's new parent bot


Parents obviously play an important role in bringing up and educating their children. But it’s fraught with difficulties and pitfalls. Parents tend to face this for the first time, without much preparation, and most would admit to botching at least some of it along the way. Many parents work hard and don’t have as much time with their children as they’d like. Few escape the inevitable conflicts over expectations, homework, diet, behaviour and so on. So what role could AI play in all this?

AI wedge

Domestic broadband was the thin end of the wedge. Smartphones, tablets and laptops were suddenly in the hands of our children, who lapped them up with a passion. Now, with the introduction of smart, voice-activated devices into the home, a new proxy parent may have arrived: devices that listen, understand and speak back, even perform tasks.

Enter Aristotle

Enter Aristotle, Mattel’s $300 assistant. They may have called it Aristotle because both his parents died when he was young, because he was the able teacher of Alexander the Great, or because Aristotle set in motion the whole empirical, scientific tradition that led to AI. To be honest, what’s far more likely is that it sounds Greek, classical and authoritative. (Aristotle's views on education here.)

It’s a sort of Amazon Echo or Google Home for kids, designed for their bedrooms. To be fair, the baby alarm has been around for a long time, so tech has been playing this role, in some fashion, for some time, largely giving parents peace of mind. It is inevitable that such devices get smarter. By smart, I mean several things. First, it uses voice, to both listen and respond. That’s good. I’ve noticed, in using Amazon Echo, how much I’ve had to speak carefully and precisely to get action (see my thoughts on Echo here). There may come a time when early language development, which we know is important in child development, could be enhanced by such AI companions. It may also encourage listening skills.
Secondly, it may encourage and satisfy curiosity. These devices are endlessly patient. They don’t get tired or grumpy, are alert and awake 24/7, and will get very smart. Thirdly, they may enhance parenthood in ways we have yet to imagine.

Child

One aspect of the technology that does appeal is its personalised voice recognition. It knows the child’s voice. This could be useful. One area where it could lessen embarrassment on both sides is timely sex education and advice. It could satisfy the child’s natural curiosity without the angst that child-parent communications can involve, as long as the child knows it is confidential and the parent is in control. As the child gets older, got a dispute over a fact? Amazon Echo, or an Aristotle, may sort it out. Stuck with your homework? These devices will inevitably be able to help. There’s already an app, Photomath - the app that students love and teachers hate - which you simply point at a mathematics problem, and it not only gives you the answer but all the steps in between. Few parents would be able to do this. Similarly with other subjects and languages. There’s no reason why the knowledge of the parent should limit the ability of a child to learn. The important thing is not to let such devices become substitutes for the horrific Tiger Mom experience, hot-housing kids with endless exercises. Learning could be done in a measured fashion. And what parent wouldn’t want such a device to become an alarm, especially on school days?

Parent

The Aristotle device is designed to let you track feeds, wet nappies and so on, even buy the necessaries. What could also be useful is the a[...]

First flirtations with Alexa (Amazon Echo)…. 10 interesting uses in learning


I’ve spent the last few years working with AI, with WildFire and CogBooks, but here it is, folks, in my home, on my kitchen table. First impressions? Looks good: a Space Odyssey-like column in black, and my first encounter is quietly weird – a moving circle of blue light, a HAL-like halo, appears, then a female voice. I say 'Hello', she says 'Hi'. I’m non-plussed by the gender-voice farrago – a sort of ‘damned if you do, damned if you don’t’ argument. What I’m really interested in is whether it is of any use. So a few ‘get to know you’ questions seemed appropriate.

Where are you?
I’m here and my head is in the cloud.
Where do you come from?
I am designed by Amazon. They’re based in Seattle, Washington.
Are you a man or a woman?
I’m female in character.

Volume was a little low, so… Turn up volume. She did!

How tall are you?
I’m just over 232 millimeters.
How wide are you?
My diameter is 7.6 centimeters.
What weight are you?
I’m just over 1 kilogram.
What do you like?
I like to read.
How do you feel today?
I’m very well thanks.

Ok, let’s up the ante – everybody does this, and I have to say, it felt a little transgressive… I swear at her....
That’s not very nice to say.
Ok, tell me a joke.
What does a house wear? A dress.

Several fairly anodyne jokes later…. OK, enough of the small talk. First up, let’s not compare Alexa to a human. It’s all too easy to do the ‘but she can’t do this… or that…’ thing. I’m not looking for a life companion, or a friend – I want to see if she’s useful. This is the first time I’ve used voice recognition in anger, woven into my life, so I’m keen to focus not on problems but potential. So far, the voice recognition is damn good. I have a strong accent, but that doesn’t throw her, and variations on the phrasing of questions seem to work (not always). There's a real problem with near-sounding homophones, but you learn to be more precise in your pronunciation.
Next line of enquiry: ‘time’.

Time

You can ask it the time or date, even holiday dates, the number of days until a holiday and so on. The sort of practical stuff we all need.

What time is it?
Bang on.
What date is it?
Day of the week and date.
When is Burns Night?
Burns Night will be on Wednesday 25 January 2017.
How many days to Burns Night?
There are 19 days until Burns Night.

The timer functions are also neat, as these are often annoyingly fiddly on your cooker or alarm clock. How often do you pop something in the oven and either ‘look to check’ or suddenly smell the charred remains?

Set a timer for 10 minutes.
Set a second timer for 20 minutes.
How much time is left on my timer?

Then there are the alarm functions.

Set alarm for 7.30 tomorrow morning.

All good: just ask, it confirms the time – done. Beyond this, she integrates with Google Calendar, reminding you of what you have to do today, tomorrow…

To do lists

To do lists are handy. I use a small notebook, but for household stuff, a shopping list or to do list in the kitchen makes sense. We can all add to the list. My gut feel, however, is that this will go the way of the chalkboard – unloved and unused. OK, let’s pause, as future uses are starting to emerge….

Use 1 – Work and Personal Assistant

Only a start, but I can already see this being used in organisations, sitting on the meeting-room table, with alarms set for 30 mins, 45 mins and five mins in an hour-long meeting. Once fully developed, it could be an ideal resource in meetings for company information – financial and otherwise. In fact, it struck me, just playing around with these functions, that Alexa, as it evolves, [...]

AI isn’t a prediction, it’s happening. How fast - what do the experts say?


AI predictions have been notoriously poor in the past. That machines will transcend man has been the subject of speculative thought for millennia. We can go all the way back to Prometheus Bound, by Aeschylus, one of the oldest Greek tragedies we have, where the God Prometheus is shackled to a rock, his liver eaten by an eagle for eternity. His crime? To have given man fire, and knowledge of writing, mathematics, astronomy, agriculture and medicine. The Greeks saw this as a topic of huge tragic import – the idea that we had the knowledge and tools to challenge the Gods. It was a theme that was to recur, especially in the Romantic period, with Goethe, Percy Bysshe Shelley and Mary Shelley, who called her book Frankenstein ‘The Modern Prometheus’. Remember that Frankenstein is not the created monster but his creator. In many ways the art of prediction is still largely in this romantic tradition – one that values idle thought and commentary over reason.

There is something fascinating about prediction in AI, as that is what the field purports to do – the predictive analytics embedded in consumer services such as Google, Amazon and Netflix have been around for a long time. Their invisible hand has been guiding your behaviour, almost without notice. So what does the field of AI have to say about itself? Putting aside Kurzweil’s (2005) singularity, as being too crude and singularly utopian, there have been some significant surveys of experts in the field.

Survey 1: 1973
One of the earliest surveys was of 67 AI experts in 1973, by the famous AI researcher Donald Michie. He asked how many years it would be until we would see “computing exhibiting intelligence at adult human level”. As you can see, there was no great rush towards hype at that time.
Indeed, the rise in 1993 turned out to be reasonable, as by that time there had been significant advances, and 2023 and beyond seems not too unreasonable, even now.

Survey 2: 2006
Jumping to Moor (2006), a survey was taken at Dartmouth College on the 50th anniversary of the famous AI conference organised by John McCarthy in 1956, where the modern age of AI started. Again, we see a considerable level of scepticism, including substantial numbers of complete sceptics who answered 'Never' to these questions.

Survey 3: 2011
Baum et al. (2011) took the Turing test as their benchmark, and surveyed on the ability to pass Turing tests, with the following results (50% probability):
Third grade – 2030
Turing test – 2040
Nobel research – 2045
Given the fact that the Todai project got a range of AI techniques to pass the Tokyo University entrance exam, this may seem like an underestimate of progress.

Survey 4: 2014
The most recent serious attempt, where all of the above data is summarised if you want more detail, was by Müller and Bostrom (2014). Their conclusion, based on surveying four groups of experts, was:

“The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.”

So still some way off: 50:50 at around 30 years, and getting more certain 50 years hence. A particularly interesting set of predictions in this survey was around the areas of probable advances: Cognitive science 47.9% Integrated cognitive arch[...]

2016 was the smartest year ever


For me, 2016 was the year of AI. It went from an esoteric subject to a topic you’d discuss down the pub. In lectures on AI in learning around the world – New Zealand, Australia, Canada, the US, the UK and around Europe – I could see that this was THE zeitgeist topic of the year. More than this, things kept happening that made it very real…

1. AI predicts a Trump win
One particular instance was a terrifying epiphany. I was teaching on AI at the University of Philadelphia, on the morning of the presidential election, and showed AI predictions which pointed to a Trump win. Oh how they laughed – but the next morning confirmed my view that old media and pollsters were stuck in an Alexander Graham Bell world of telephone polls, while the world had leapt forward to data gathering from social media and other sources. They were wrong because they don’t understand technology. It’s their own fault, as they have an in-built distaste for new technology, as it’s seen as a threat. At a deeper level, Trump won because of technology. The deep cause – technology replacing jobs – has already come to pass. It was always thus. Agriculture was mechanised and we moved into factories; factories were automated and we moved into offices. Offices are now being mechanised and we’ve nowhere to go. AI will be the primary political, economic and moral issue for the next 50 years.

2. AI predicts a Brexit win
On the same basis, using social media data predictions, I predicted a Brexit win. The difference here was that I voted for Brexit. I had a long list of reasons – democratic, fiscal, economic and moral – but above all, it had become obvious that the media and traditional, elitist commentators had lost touch with both the issues and the data. I was a bit surprised at the abuse I received, online and face-to-face, but the underlying cause – technology replacing meaningful jobs – has come to pass in the UK also. We can go forward in a death embrace with the EU or create our own future. I chose the latter.

3. Tesla times
I sat in my mate Paul Macilveney’s Tesla (he has one of only two in Northern Ireland) while it accelerated (silently), pushing my head back into the passenger seat. It was a gas without the gas. On the dashboard display I could see two cars ahead and vehicles all around the car, even though they were invisible to the driver. Near the end of the year we saw a Tesla predict an accident between two other unseen cars, before it happened. But it was when Paul took his hands off the steering wheel, as we cambered round the corner of a narrow road in Donegal, that the future came into focus. In 2016, self-driving cars became real, and inevitable. The car is now a robot in which one travels. It has agency. More than this, it learns. It learns your roads, routes and preferences. It is also connected to the internet, and that learning, the mapping of roads, is shared with all as-yet-unborn cars.

4. AI on tap
As the tech giants motored ahead with innumerable acquisitions and the development of major AI initiatives, some even redefining themselves as AI companies (IBM and Google), it was suddenly possible to use their APIs to do useful things. AI became a commodity or utility – on tap. That proved useful, very useful, in starting a business.

5. OpenAI
However, as an antidote to the danger that the tech monsters will be masters of the AI universe, Elon Musk started OpenAI. This is already proving to be a useful open source resource for developers. Its ‘Universe’ is a collection of test environments in which you can run your algorithms. This i[...]

Brains - 10 deep flaws and why AI may be the fix


Every card has a number on one side and a letter on the other. If a card has a D, then it has a 3 on the other side. What is the smallest number of cards you have to turn over to verify whether the rule holds?

D      F      3      7

(Answer at end)

Most people get this wrong, due to a cognitive weakness we have – confirmation bias. We look for examples that confirm our beliefs, whereas we should look for examples that disconfirm our beliefs. This, along with many other biases, is well documented by Kahneman in Thinking, Fast and Slow. Our tragedy as a species is that our cognitive apparatus, and especially our brains, have evolved for purposes different from their contemporary needs. This makes things very difficult for teachers and trainers. Our role in life is to improve the performance of that one organ, yet it remains stubbornly resistant to learning.

1. Brains need 20 years of parenting and schooling
It takes around 16 years of intensive and nurturing parenting to turn brains into adults who can function autonomously. Years of parenting, at times fraught with conflict, while the teenage brain, as brilliantly observed by Judith Harris, gets obsessed with peer groups. This nurturing needs to be supplemented by around 13 years of sitting in classrooms being taught by other brains – a process that is painful for all involved: pupils, parents and teachers. Increasingly this is followed by several years in college or university, to prepare the brain for an increasingly complex world.

2. Brains are inattentive
You don't have to be a teacher or parent for long to realise how inattentive and easily distracted brains can be. Attention is a necessary condition for learning, yet they are so easily distracted.

3. Fallible memories
Our memories are limited not only by the narrow channel that is working memory but by the massive failure to shunt what we learn from working to long-term memory.
And even when memories get into long-term memory, they are subject to further forgetting, even reconfiguration into false memories. Every recalled memory is an act of recreation and reconstitution, and therefore fallible. Without reinforcement we retain and recall very little. This makes brains very difficult to teach.

4. Brains are biased
The brain is inherently biased; not only sexist and racist, it has dozens of cognitive biases, such as groupthink, confirmation bias and many other dangerous biases that shape and limit thought. More than this, it has severe weaknesses and inherent tendencies, such as motion sickness, overeating, jet-lag, phobias, social anxieties, violent tendencies, addiction, delusions and psychosis. This is not an organ that is inherently stable.

5. Brains need sleep
Our brains sleep eight hours a day; that’s one third of life gone, down the drain. Cut back on this and we learn less, get more stressed, even ill. Keeping the brain awake, as torturers will attest, will drive it to madness. Even when awake, brains are inattentive and prone to daydreaming. This is not an organ that takes easily to being on task.

6. Brains can’t upload and download
Brains can’t upload and download. You cannot pass your knowledge and skills to me without a huge amount of motivated teaching and learning. AI can do this in an instant.

7. Brains can't network
Our attempts at collective learning are still clumsy, yet collective learning and intelligence is a feature of modern AI.

8. Brains can't m[...]
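The card puzzle at the top is the classic Wason selection task, and the logic behind the answer can be made explicit in a few lines. This is a sketch of my own (not from any published source): a card needs flipping only if what is hidden on its other side could falsify the rule 'if D, then 3'.

```python
# Wason selection task: rule is "if a card shows D, the other side is a 3".
# A card must be flipped only if its hidden side could falsify the rule.
def must_flip(face: str) -> bool:
    if face.isalpha():
        return face == "D"   # D: the hidden number might not be a 3
    return face != "3"       # 7: the hidden letter might be a D

cards = ["D", "F", "3", "7"]
to_flip = [c for c in cards if must_flip(c)]
print(to_flip)  # ['D', '7'] - so the answer is 2 cards
```

Note that 3 does not need flipping: the rule says nothing about what must be on the back of a 3, which is exactly the confirmation-bias trap – people flip the 3 looking for confirming evidence, and ignore the 7 that could disconfirm.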

Bot teacher that impressed and fooled everyone


An ever-present problem in teaching, especially online, is the very many queries and questions from students. In the Georgia Tech online course this was up to 10,000 per semester from a class of 350 students (300 online, 50 on campus). It’s hard to get your head round that number, but Ashok Goel, the course leader, estimates that it is one year's work for a full-time teacher. The good news is that Ashok Goel is an AI guy and saw his own subject as a possible solution to this problem. If he could get a bot to handle the predictable, commonplace questions, his teaching assistants could focus on the more interesting, creative and critical questions. This is an interesting development, as it brings tech back to the Socratic, conversational, dialogue model that most see as lying at the heart of teaching.

Jill Watson – fortunate error
How does it work? It all started with a mistake. Her name, Jill Watson, came from the mistaken belief that Tom Watson’s (IBM’s legendary CEO) wife was called Jill – her name was actually Jeanette. Four semesters of query data, 40,000 questions and answers, and other chat data were uploaded and, using Bluemix (IBM’s app development environment for Watson and other software), Jill was ready to be trained. Initial efforts produced answers that were wrong, even bizarre, but with lots of training and agile software development Jill got a lot better, and was launched upon her unsuspecting students in the spring semester of 2016.

Bot solution
Jill solved a serious problem – workload. But the problem is not just scale. Students ask the same questions over and over again, but in many different forms, so you need to deal with lots of variation in natural language. This lies at the heart of the chatbot solution – a more natural, flowing, frictionless, Socratic form of dialogue with students.
The database, therefore, had many species of questions, categorised, and as a new question came in, Jill was trained to categorise it and find an answer. With such systems it sometimes gets it right, sometimes wrong, so a mirror forum was used, moderated by a human tutor. Rather than relying on memory alone, they added context and structure, and performance jumped to 97%. At that point they decided to remove the mirror forum. Interestingly, they had to put a time delay in to avoid Jill seeming too good. In practice academics are rather slow at responding to student queries, so they had to replicate bad performance. Interesting that, in comparing automated with human performance, it wasn't a matter of living up to expectations but of dumbing down to the human level.

These were questions about coding, timetables, file formats, data usage – the sort of questions that have definite answers. Note that she has not replaced the whole teaching task, only made teaching and learning more efficient, scalable and cheaper. This is likely to be the primary use of chatbots in the short to medium term – tutor and learner support. That’s admirable.

Student reactions
The students admitted they couldn’t tell, even in classes run after Jill Watson’s cover was blown – it’s that good. What’s more, they like it, because they know it delivers better information, often better expressed and (importantly) faster than human tutors. Despite the name, and an undiscovered run of three months, the original class never twigged. Turing test passed.

In Berlin this month, I chaired Tarek Richard Besold, of the University of Bremen, who gave a fascinating talk through some of[...]
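The categorise-then-answer loop described above can be sketched in a few lines. To be clear, this is a hypothetical stand-in, not the actual Watson/Bluemix pipeline: match an incoming question against stored ones by word overlap, return the canned answer if the match is strong, and otherwise escalate to a human (the role the mirror forum played). The FAQ entries and threshold are invented for illustration.

```python
# Minimal sketch of "categorise the question, return the stored answer".
# Hypothetical stand-in for the Jill Watson pipeline described in the post.
from collections import Counter
import math

FAQ = {  # invented example entries
    "when is the assignment due": "Assignment 1 is due Sunday 23:59 UTC.",
    "what file format should i submit": "Submit a single .zip of your project.",
    "where do i find the lecture videos": "Videos are on the course portal.",
}

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, threshold=0.3):
    # Pick the stored question most similar to the new phrasing;
    # below the threshold, defer to a human TA (the "mirror forum" role).
    best, score = max(((q, cosine(vec(question), vec(q))) for q in FAQ),
                      key=lambda p: p[1])
    return FAQ[best] if score >= threshold else "Escalating to a human TA."

print(answer("what format is the file i submit"))
# -> "Submit a single .zip of your project."
```

The point of the sketch is the variation-in-phrasing problem the post mentions: the query and the stored question share enough words to match even though they are worded differently, while an off-topic query falls below the threshold and goes to a human.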

Emotional Intelligence - another fraudulent fad


The L and D hammer is always in search of nails to slam into the heads of employees. So imagine their joy, in 1995, when ‘Emotional Intelligence’ hit HR on the back of Daniel Goleman’s book Emotional Intelligence. (The term actually goes back to a 1964 paper by Michael Beldoch, and has more than a passing reference to Gardner’s Multiple Intelligences.) Suddenly, a new set of skills could be used to deliver another batch of ill-conceived courses, built on the back of no research whatsoever. But who needs research when you have a snappy course title?

EI and performance
At last we have some good research on the subject, which shows that the basic concept is flawed – that having EI is less of an advantage than you think. Joseph et al. (2015) published a meta-analysis of 15 carefully selected studies, easily the best summary of the evidence so far. What they found was a weak correlation (0.29) with job performance. Note that 0.4 is often taken as a reasonable benchmark for evidence of a strong correlation. To put this into plain English, it means that EI has a predictive power on performance of only 8.4%. Put another way, if you’re spending a lot of training effort and dollars on this, it’s largely wasted. The clever thing about the Joseph paper was their careful focus on actual job performance, as opposed to academic tests and assessments. They cut out the crap, giving it real evidential punch.

EI is a bait and switch
What became obvious as they looked at the training and tools is that there was a bait and switch going on. EI was not a thing-in-itself but an amalgam of other things, especially good old personality measures. When they unpacked six EI tests, they found that many of the measures were actually personality measures, such as conscientiousness, industriousness and self-control. These had been stolen from other personality tests. So they did a clever thing and ran the analysis again, this time with controls for established personality measures.
This is where things got really interesting. The correlation between EI and job performance dropped to a shocking -0.2.

Weasel word ‘emotional’
Like many fads in HR, an intuitive error lies at the heart of the fad. It just seems intuitively true that people with emotional sensibility should be better performers, but a moment’s thought and you realise that many forms of performance may rely on many other cognitive traits and competences. In our therapeutic age, it is all too easy to attribute positive qualities to the word ‘emotional’ without really examining what that means in practice. HR is a people profession – people who care. But when they bring their biases to bear on performance, as with many other fads, such as learning styles, Maslow, Myers-Briggs, NLP and mindfulness, emotion tends to trump reason. When it is examined in detail, EI, like these other fads, falls apart.

Weasel word ‘intelligence’
I have written extensively about the danger in using the word ‘intelligence’, for example in artificial intelligence. The danger with ‘emotional intelligence’ is that a dodgy adjective pushes forward an even dodgier noun. Give emotion the status of ‘intelligence’ and you give it a false sense of its own importance. Is it a fixed trait, stable over time? Can it be taught and learned? Eysenck, the doyen of intelligence theorists, dismissed Goleman’s definition of ‘intelligence’ and thought his claims were unsubstantiated. In truth the use of [...]
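Where the 8.4% figure above comes from is worth spelling out: it is simply the squared correlation coefficient (the coefficient of determination, r²), i.e. the share of variance in job performance that EI scores account for. A one-liner confirms the arithmetic:

```python
# "Predictive power" here is the coefficient of determination r^2:
# the share of variance in job performance explained by EI scores.
r = 0.29                       # correlation reported by Joseph et al. (2015)
variance_explained = r ** 2    # 0.29 * 0.29 = 0.0841
print(f"{variance_explained:.1%}")  # -> 8.4%
```

The same arithmetic explains why 0.4 is a common benchmark: even at r = 0.4, a measure explains only 16% of the variance in the outcome.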

Top 20 myths in education and training


Let's keep this simple… click on each title to get the full critique…

1. Leadership
We’re all leaders now, rendering the word meaningless.

2. Emotional Intelligence
Dodgy adjective pushing an even dodgier noun.

3. Learning styles
An intuition gone bad.

4. Myers-Briggs
Shoddy and unprofessional judgements on others.

5. Learning objectives
Bore learners senseless at the start of every course.

6. Microlearning
Another unnecessary concept.

7. Maslow's hierarchy
Coloured slide looks good in PowerPoint.

8. Diversity
Inconvenient truths (research) show it's wrong-headed.

9. NLP
No Longer Plausible: training’s shameful, fraudulent cult?

10. Mindfulness
Yet another mindless fad.

11. Social constructivism
Inefficient, inhibiting and harmful fiction.

12. 21st C skills
They're so last century!

13. Values
Don't be a dick! – the rest is hubris.

14. Piaget
Got nothing right and a poor scientist.

15. Kirkpatrick
Old theory, no longer relevant.

16. Mitra's Hole-in-the-wall
Not what it seems!

17. Mitra's SOLE
10 reasons why it is wrong.

18. Ken Robinson
Creative with the truth.

19. Gamification
Playing Pavlov with learners?

20. Data analytics
Do bears shit in the woods?

[...]

Why AI needs to drop the word ‘intelligence’


The Turing test arrived in the brilliant Computing Machinery and Intelligence (1950), along with its nine defences – still an astounding paper that sets the bar on whether machines can think and be intelligent. But it is unfortunate that the title includes the word ‘intelligence’, as it is never mentioned in this sense in the paper itself. It is also unfortunate that the phrase AI (Artificial Intelligence), invented by John McCarthy in 1956 (the year of my birth) at Dartmouth (where I studied), has become a misleading distraction.

Binet, who was responsible for inventing the IQ (intelligence quotient) test, warned against it being seen as a sound measure of individual intelligence, or as something ‘fixed’. His warnings were not heeded, as education itself became fixated with the search for, and definition of, a single measure of intelligence – IQ. The main protagonist was Eysenck, and it led to fraudulent policies, such as the 11+ in the UK, promoted on the back of fraudulent research by Cyril Burt. Stephen Jay Gould’s 1981 book The Mismeasure of Man is only one of many that have criticised IQ research as narrow, subject to reification (turning abstract concepts into concrete realities) and linear ranking, when cognition is, in fact, a complex phenomenon. IQ research has also been criticised for repeatedly confusing correlation with cause, not only in heritability, where it is difficult to untangle nature from nurture, but also when comparing scores in tests with future achievement. Class, culture and gender may also play a role, and the tests are not adjusted for these variables. The focus on IQ – a search for a single, unitary measure of the mind – is now seen by many as narrow and misleading. Most modern theories of mind have moved on to more sophisticated views of the mind as having different but interrelated cognitive abilities. Gardner tried to widen its definition into Multiple Intelligences (1983), but this is weak science and lacks any real rigour.
It still suffers from a form of academic essentialism. More importantly, it distorts the field of what is known as Artificial Intelligence.

Drop the word ‘intelligence’
We would do well to abandon the word ‘intelligence’, as it carries with it so much bad theory and practice. Indeed AI has, in my view, already transcended the term, as it has gained competences across a much wider set of domains (previously ‘intelligences’), such as perception, translation, search, natural language processing, speech, sentiment analysis, memory, retrieval and many others.

Machine learning
Turing interestingly anticipated machine learning in AI, seeing the computer as something that could be taught like a child. This complicates the use of the word ‘intelligence’ further, as machines in this sense operate dynamically in their environments, growing and gaining in competence. Machine learning has led to successes in all sorts of domains beyond the traditional field of IQ and human ‘intelligences’. In many ways it is showing us the way, going back to a wider set of competences that includes both ‘knowing that’ (cognitive) and ‘knowing how’ (robotics). This was seen by Turing as a real possibility, and it frees us from the fixed notion of intelligence that got so locked down into human genetics and capabilities.

Human-all-too-human
Other formulations of capabilities may be found if we do [...]
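The 'taught like a child' idea in the machine learning passage above can be made concrete with the oldest toy example in the field: a perceptron that is never programmed with a rule, only shown examples, and adjusts itself until it behaves correctly. This is an illustrative sketch of mine, not anything from Turing's paper; the task (learning logical AND) and the learning rate are arbitrary choices.

```python
# A minimal flavour of "taught, not programmed": a perceptron that
# learns the AND function purely from labelled examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1   # nudge the weights toward the examples
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 0, 0, 1]: it has learned AND without ever being told the rule
```

Nothing about ‘intelligence’ is needed to describe what happens here: the machine simply gains a competence through experience, which is exactly the shift in framing the post argues for.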