It’s no secret that sales of activity tracking apps and wearables have boomed, with The Telegraph reporting that over three million fitness trackers fly off the shelves in the UK each year. From monitoring our fitness and sleep to testing our mental reflexes, self-improvement is officially the name of the game.
According to our recent survey, 86% of higher education (HE) students think an activity tracking app for learning and teaching would be helpful. The survey also shows that 78% of HE students would be happy to have their learning data collected if it improved their grades, and 61% would be happy to have it collected if it stopped them from dropping out.
These findings come as no surprise: the survey also found that 98% of HE students think that technology is becoming increasingly important in education, and 76% of those think so because it makes life more efficient. It seems clear that students would wholeheartedly welcome this self-improvement movement, along with the tech they use in their everyday lives, into the education sector.
Enter learning analytics. This year, we will be releasing a learning analytics student app, so that students will be able to see how their learning activity compares with others and set targets to do better in their courses. This will not only benefit students, but staff members too, who will be able to view a dashboard showing the learner engagement and attainment of their students, allowing them to better target students who might be struggling with the course, and prevent drop-outs too. The app will also help staff members to better understand how to make learning more effective.
Speaking to Times Higher Education earlier this year, Ian Fordham, Microsoft UK’s new director of education, said higher education institutions are in a “hybrid” state of adapting technology into their academic offer:
"I think the learning analytics movement in HE is going to become much more significant, tracking students on their learning journey – for example, the amount of money that universities waste on lost students in terms of that journey.
"Embedding learning analytics within a university’s tech-enhanced learning environment brings many advantages, including having a single version of the truth, where universities have clear data on which to base informed decisions and create intervention plans early to improve an outcome.”
"It’s brilliant to see that students are as inspired about the creation of an app to improve their learning experience as we are. The app has the potential to help students take control of their learning progress as well as enabling university staff to continually improve the experience they offer students. With such apps becoming everyday in other sectors and industries, it’s time that education reaped the benefits of such technology too.
At Jisc we believe that digital has the power to transform and revolutionise education, and our work with learning analytics is an important step in the right direction."
Data scientist and Networkshop keynote speaker Miranda Mowbray explains how finding patterns in large data sets may offer a huge step forward in tackling network attacks. In this interview she also considers the particular security challenges posed by the Internet of Things, the ethical issues around big data analysis, and argues that we should not blame users for their poor password choices.
You use big data to find attacks on computer networks. How? And how does it improve on how people were doing it before?
In general, the traditional way of finding attacks on computer networks is to identify the “signature”. So, for a particular attack, you know that there is a particular sequence of bytes that’s a signature of that malware, or the signature is a little more behavioural, for instance that a certain number of bytes are sent to a particular port. Or the malware uses a particular domain to communicate between the infected machine and the malware.
But this is a rather fragile way of detecting attacks, because if the malware designer manages to change the signature and slightly upgrade the attack, it suddenly becomes invisible. Malware designers have found ingenious ways to design malware so that it gives a different signature every time - the malware itself mutates.
However, if you collect lots of data from the network and use data analysis on it, it’s sometimes possible to spot patterns. So, for example, although the domain will be different every time, you see patterns in the set of domains that are picked by the random generator within the malware. And if you can see this happening in ways that are consistent with the attack you can do larger-scale pattern spotting and use that to detect malware that mutates.
One way that we did this was to detect domain fluxing algorithms. This is a technique in malware where the domain that is used to connect between the malware controller and the infected machine is different each time. But there are patterns in the features of the domains that they try to connect to. We weren’t the first people to use data science to find these patterns, but we designed a detection method that, from five days' data from five different months in a large enterprise network, uncovered 19 families, nine of which we hadn't seen before. The previous record was in a paper that reported six previously unknown families detected in 18 months' data, and most previous research papers on this topic reported just one new family.
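Mowbray's actual detection method isn't described in detail here, but the core idea of spotting lexical patterns in algorithmically generated domains can be sketched in a few lines. The entropy and vowel-ratio cutoffs below are illustrative assumptions, not values from her research:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def dga_features(domain):
    """Extract simple lexical features from a domain's leftmost label."""
    label = domain.lower().split(".")[0]
    vowels = sum(ch in "aeiou" for ch in label)
    return {
        "length": len(label),
        "entropy": shannon_entropy(label),
        "vowel_ratio": vowels / max(len(label), 1),
        "digit_ratio": sum(ch.isdigit() for ch in label) / max(len(label), 1),
    }

def looks_generated(domain, entropy_cutoff=3.3, vowel_cutoff=0.25):
    """Flag domains whose labels look random: high entropy, few vowels."""
    f = dga_features(domain)
    return f["entropy"] > entropy_cutoff and f["vowel_ratio"] < vowel_cutoff
```

A real detector would learn these thresholds from labelled traffic and combine many more features, but even this toy version separates a random-looking label like "xjkwqzrtpvnd" from a dictionary word.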
Are there any downsides with using data science for the detection of network security issues?
Yes. There are several big ones. The first is false positives. We’re looking at billions of events per day and if you have one chance in a thousand of getting a false positive, then that's millions of false positives. That’s not good enough – the number of false positives has to be kept low. There are various ways in which you can do this. Generally, you do it as a trade-off between false positives and false negatives, in that you’re prepared to miss things, but there are also other techniques. For example, rather than immediately ban something from your network, you may quarantine it, so that the stream connected with it cannot reach the more sensitive parts of your network – more sensitive databases, for example.
Another thing you can do is delay and collect more information. So you say, “this looks suspicious, we’re going to delay its behaviour, observe it some more and see whether we can be more sure whether it’s actually bad news or not”. That may result in a slight delay but, when we tried it, users didn’t actually notice anything at all.
Another option, and the point at which users might become aware, is if you decide to set up automatic send-outs of notifications to users saying “we’ve noticed something suspicious and here’s what you have to do”.
An inherent problem with cybersecurity from a data science perspective is that, unlike other areas of data science, there is an adversary. You’re not trying to find general patterns in nature or the universe or a business, there is someone who is actively trying to fool you – and that makes it more challenging and more interesting. You have to make the detective methods you develop expensive for your attackers to get around, or slow them down. You have to think always about how they might be circumvented. That’s pretty cool and fun.
Another issue is that because you’re looking at a very large amount of potentially sensitive data, keeping this data private and secure is really important. That’s less of a factor in some other areas of data science, where the data is public and not particularly sensitive.
A further issue is the frequency of the true positives. Supposing one in a million of the billion events you look at is associated with an attack, that’s a thousand a day. Ringing an alarm bell for each one of those will not go down well with your security event team. So you have to collate the data and show it in a way that’s more helpful and easy to manage for human beings. If you can work out if there are things happening that are all associated with the same attack, you can report them together, ringing just one alarm bell.
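That collation step can be sketched simply: detections that share an indicator, such as a malware family, are rolled up into one summary alert instead of a thousand alarm bells. The field names here are hypothetical, not from any real system:

```python
from collections import defaultdict

def collate_alerts(events, key=lambda e: e["family"]):
    """Group raw detections by a shared attack indicator and
    emit one summary alert per group instead of one per event."""
    groups = defaultdict(list)
    for event in events:
        groups[key(event)].append(event)
    return [
        {
            "family": family,
            "event_count": len(members),
            "hosts": sorted({e["host"] for e in members}),
        }
        for family, members in groups.items()
    ]
```

A security team then sees one alert per campaign, with the affected hosts listed, rather than one alert per event.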
What added issues does the Internet of Things bring?
In one sense, the Internet of Things is no different from what we had before – it’s hardware plus software plus networking plus applications, and a security problem in any one of those is a security problem for the whole thing. But what’s unusual about the Internet of Things is that this is a new industry with a lot of new companies whose main focus is not going bust. They have a limited amount of time and venture capital to bring to market a product that actually functions and they are concentrating on that. Anything that will delay their time to market or increase their cost per device is going to be treated as an area where they may have to make cuts – and one of these areas is security and privacy. The issue with privacy is that it may be that, for the business model, the whole point of the device is to collect as much data as possible.
There’s also an issue that this is mainly small companies where the manufacturing chain may be very long and complex. It may involve teams from small startups in seven different parts of the world, where none of the teams have a security expert. And interactions between what any of these teams do might cause a security problem.
Another issue is where an object is designed for offline use and then put on the network. It may be very securely designed and be fine as long as it’s offline but, as soon as you hook it up, all sorts of new problems emerge. For example, there are doll manufacturers that are internet-enabling their dolls and there have been vulnerabilities discovered in these. They may be designed to be safe and work well as offline dolls but once you put them on the network there’s an issue.
How can we move forward in making people actually care about security? Is it down to design processes, business models or user mindsets? (Or all three…)
It’s all three. For example, with the Internet of Things, as well as the design of the technology, one issue that can easily be addressed is that some Internet of Things start ups don’t have good processes in place. They want to do the right thing but, for example, they do not have good procedures for responding to a vulnerability report.
There was an investigation into the security of baby monitors which found vulnerabilities – not really surprising – but what did surprise me was the very inadequate response of some of the companies to the vulnerability reports. They didn’t seem to have any processes in place for dealing with them. There was an exception – Philips was exemplary – which shows that this can be done.
As for mindsets, there’s a terrific paper by Anne Adams and Angela Sasse at University College London called Users Are Not the Enemy and the idea is “don’t blame the user”. When people talk about changing mindsets, sometimes what they mean is “those blasted users, they behave insecurely”. Adams and Sasse looked at some examples where this was being said and found that the systems were set up so that it was almost impossible for the users to behave securely!
I did some research with a different team on what makes people more likely to share online. We found that defaults are a very important driver. If you make things easy for people to do, then it’s more likely they will do them, particularly if you can make it the default. There are some examples in the Internet of Things where the default is insecure and the users have to do something to make it secure. That should not be the case. If a customer insists on using "password" as a password, there should be something that says, "we don’t allow that".
Generally, before you do user education, you should do everything else first. If you have a problem with users, it’s probably evidence that your design and your architecture is not good enough. Once you’ve improved that, then you can do user education.
What would you say is the balance between attack and defence? Who is winning?
I don’t think it’s a case of “winning”. I see the network ecosystem like a biological ecosystem. Most people who use any kind of network are good, but there are a few people who are out to exploit that: they're parasites. But it’s not in the interests of the parasites to kill off the host, to kill off the ecosystem.
We are always going to have people designing malware in order to make money, it’s likely that they will continue to be able to make some money but I don’t think that they are going to be able to close the whole thing down. I don’t think anyone can ever win, but I don’t think we’ll lose.
How might we go about understanding the 'thought process' of a machine learning algorithm?
There is an issue with some types of machine learning in that they are very opaque. So, for example, I could tell you all the features that are used in one of my algorithms and the weights assigned to those features. But the weights depend on the recent data that’s come in, so tomorrow they may be different. I can give you a full description of the algorithm – I can say everything it does – but that’s not really explaining the thought processes. You don’t necessarily have an insight into what it’ll do tomorrow. And it may surprise you!
However, there is a lot of work being done in making machine learning less opaque, and I do think there is scope for it. For example, one thing you can do is find the top five features that are most salient in a classification algorithm. If I'm classifying something as malware, or not, or infected or not, I can tell you the features and their relative weights, and I can say how these have changed over time. I can also give you typical instances of where it has classified something as malware or not, or something that it wasn't quite sure about and it plumped for not-malware, and these were the weights that motivated that decision.
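For a linear model, the "features and their relative weights" behind a single decision can be surfaced directly by ranking each feature's contribution to the score. This is a generic sketch, not Mowbray's code, and the feature names in the usage example are invented:

```python
def top_contributions(weights, features, k=5):
    """Rank features by |weight * value|: the ones that pushed
    this particular classification hardest, in either direction."""
    contribs = {name: weights.get(name, 0.0) * value
                for name, value in features.items()}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
```

Given the model's weight vector and one observation's feature values, this returns the handful of features that most explain why that observation was classified as malware or not.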
There's also some very nice work by Cynthia Rudin and her team on interpretable models: if you're learning from training data with 100 features, instead of looking for a model that uses the values of all 100 features, you can look for the best predictive model that only uses a small number of the features, and which evaluates an input just with a simple scoring system for these features. They've shown that for many applications you can find simple, easily explainable models of this kind that are just as accurate as the ones found by more opaque processes.
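A toy version of such a scoring system, in the spirit of Rudin's interpretable models rather than taken from her work (the features and point values here are invented for illustration):

```python
# Hypothetical point-based scorecard: each observed feature earns a
# small integer score, and the total is compared against a threshold.
# A human can read the whole model at a glance.
SCORECARD = {
    "domain_entropy_high": 3,
    "rare_destination_port": 2,
    "beaconing_interval_regular": 2,
    "no_dns_response_history": 1,
}

def score(observation, threshold=4):
    """Sum the points for the features present in this observation
    and flag it if the total meets the threshold."""
    total = sum(points for feature, points in SCORECARD.items()
                if observation.get(feature))
    return total, total >= threshold
```

The appeal is that every decision is auditable: "flagged because high-entropy domain (3) plus rare port (2) = 5, over the threshold of 4."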
In the particular case of deep learning, which tends to be completely opaque, there is work being done to make it less so. Deep learning produces different abstraction layers for data, it finds features of a data set and features of those features and features of those features of those features. You can find out what high level features it's discovered, and that may give you some sort of insight into what it's doing.
There’s an international shortage of data scientists. How do we fill that gap - starting early with schools and STEM subjects? Encouraging more women into the field? Or will automation fill the gap for us?
Automation will be part of the solution but it is not enough. We can automate to a certain extent, and it's right that we do that, but there will be a continuing need for data scientists.
To do data science you need the technical bit, you need the hacking bit, but you also need the domain knowledge and so it really is an art as well as a science.
The appeal of the field has changed quite a lot in just the last few years because of the amount of publicity around how much money data scientists can make. One result of that is a bunch of people who do not have the maths background but are good at the hacking are getting into data science and that's not necessarily the ideal route, because it's easy to do it wrong. Data science involves a collection of skills that, from an educational point of view, we're not really educating people to have together.
As well as the analytical, statistical and hacking skills, there are domain skills, so a data scientist in agriculture might be rather different from a data scientist in business. But an absolutely crucial requirement is the ability to communicate your results clearly in a way that a layperson can understand but does not traduce the science. And that is something that, traditionally, computer scientists aren't given much coaching on.
What has your research told you about ethical issues in big data analysis (particularly in an educational context)?
My work has been in the corporate context but one thing that I've been impressed by is the seriousness with which universities take ethical issues around the guardianship of big data; in my opinion, they do this better than the commercial world. So when I was looking at codes of practice I generally found better ones from academic institutions or scientific bodies than from the commercial world. Having an ethics board is absolutely normal in universities; it's rarer in industry.
I was doing research on more than one large network so, in theory, I had access to very large amounts of network data that hadn't been put through any obfuscation or anonymisation process. As an experiment, I got a colleague to pseudonymise some data that I was allowed to look at, so he replaced each identifier by a pseudonym, and then I had a look to see what I could find – and I found out some pretty sensitive personal things, some pretty sensitive corporate information, and it spooked me. My project already had a code of practice but it was a bit dusty so I did a complete overhaul.
More recently, there's been a framework for the ethical use of big data analysis brought out by the Home Office and the Cabinet Office and I gave input as part of the advisory board. They workshopped the first draft with members of the public so I answered questions from the workshop participants about data science. That was fascinating because it turned out the sorts of things that the members of the public were concerned about were different from what the experts were concerned about and it wasn't what we predicted.
For members of the public generally, what they cared about was what their data was being used for. They wanted it to be used for something of public benefit or personal benefit to them and they wanted some assurance that it would actually work, that it would be useful. That mattered more than the details of how we were looking after the data. They were ok with us doing things that would have been a big no-no for me, provided that it was in a good cause and would be effective. It was all very pragmatic, very sensible – and very hard to translate into technical rules.
Miranda Mowbray's research has included work on machine learning applied to computer network security, and ethics for big data analysis. She was previously at Hewlett Packard Enterprise Labs, finding new ways of analysing data to detect attacks on computer networks. She is a Fellow of the British Computer Society.
Miranda at Networkshop45
Miranda gave the Networkshop keynote presentation on 11 April 2017 at 14:45 on machine learning for network security.
At Digifest this year, our startups competition had a twist and nine teams pitched live to a crowd of sector experts and peers in an attempt to bag the grand prize of a support package worth up to £20,000.
In this podcast we chat to some of the winners about how their startups stand to shape the sector: Jonathan May, CEO and founder of Hubbub, and Donald Clarke, CEO of the people's choice, Wildfire.
Seamless Wi-Fi across public services could transform everything from disaster response to health and social care. As 'eduroam for the public sector' is rolled out across the country, we explore how it is working.
When Leeds flooded, the city’s emergency response teams – fire and rescue, police and the Highways Agency – couldn’t use their normal buildings, which were waterlogged, to coordinate the response. One year later, the city would have had govroam in place as part of the Yorkshire and Humberside Public Services Network (YHPSN), which would have meant they could all access their networks from elsewhere. With govroam, the city’s integrated response plan wouldn’t flounder.
Disaster recovery is an extreme example, but the potential of federated roaming technology in public services should not be underestimated.
“It’s massive,” confirms Jon Browne, YHPSN programme lead. “The free movement of people between organisations is going to be critical and govroam is one of those fundamental building blocks. By itself it doesn’t do anything, it attaches you to a network and authenticates you to use a connection back to your organisation. But it’s what that then enables you to do…”
“Seamless” is the word that comes up again and again when talking about user experience: and as Leeds’ flooding response proves, that one simple capability unleashes a tidal wave of possibility.
govroam is eduroam for public services: the same technology and philosophy, for a different community. It allows public services staff visiting another connected institution to log on to Wi-Fi using the same credentials they use at their home institution. Once the profile is installed, the connection happens automatically, without the need to register individually or reconfigure the device when you arrive at each new site.
There are many ways of doing this, but eduroam was the obvious model of choice: it has reliable, tested technology, open source radius server options and simple, non-proprietary architecture. It can scale to support any number of sites and isn’t limited to specific Wi-Fi technologies – and everything can be installed, supported and managed by a single point of contact.
Individual councils and public service networks (PSNs) have been working on regional roaming capabilities for years. Yorkshire and Humberside have well over 60 partners wanting to use theirs – “they’re queueing up!” says Browne – including all county and unitary councils, three-quarters of police forces, two transport organisations and most health trusts.
“And that’s just in the region,” Browne continues. “You’ve then got border areas such as Bradford, who want to go over into Lancashire, but can’t at the moment – as soon as you’ve got govroam, you can go anywhere.” It became clear that public services were facing a stark choice: fragment into multiple, incompatible islands, or standardise. The national infrastructure went live in September 2016.
The challenge of going national, says David Hayling, head of IT infrastructure at the University of Kent, was “establishing trust in terms of understanding each other’s requirements”. eduroam provides a simple internet access mechanism for students, staff and researchers.
Public services handle sensitive data on deliberately siloed corporate networks. Setting up shared PSNs – a link between organisations – is one thing, but authorities were struggling to cooperate further because of differing security requirements. “Each legal entity – county council, borough councils etc – had to individually sign to say they complied, and provide documentation to demonstrate that,” explains Hayling.
It meant multiple, duplicated, inefficient systems: to share wireless access, you need to provide public Wi-Fi or temporary guest permits, and people working remotely need 3G/4G dongles.
“The higher education people were in these meetings,” recalls Hayling, “sitting there aghast: we cracked this years ago! We’ve got the answer and 20 years’ experience. eduroam works.”
There are certainly challenges applying it to public services, he concedes, but nothing govroam can’t accommodate: “it’s flexible enough to separate out layers of what people are trying to achieve.”
govroam uses end-to-end encryption (AES as part of 802.1X tunnelling) to ensure private user credentials are only available to a user’s home organisation for authentication; they are never exposed over the air or accessible by the visited site’s infrastructure, so spoof networks set up with the aim of harvesting credentials have very little opportunity to access them. It’s easy to use, but also removes the opportunity for user error. Fundamental to the trust model of govroam is the assurance that all users are bona fide government workers or their representatives.
However, while the authentication and access security checks are protected, the communication is still Wi-Fi, so the answer for Hayling was as simple as reminding people to separate out two ideas: the network they’re connecting to, and the level of security assurance they need. govroam allows participating organisations to still deploy their own encryption or VPN.
“That’s what the National Cyber Security Centre does with its stuff,” Hayling says, “VPN back after connecting through govroam”. What govroam offers is the assurance that “you are connecting to a genuine network you can trust, but to keep a higher level of assurance you take your own steps. If a school provides students with mobile devices, for example, the school configures the device to provide internet filtering necessary to the user of that device.”
So what’s the potential of this for public services? The first benefit is simply efficiency. Imagine a child is going to be spending a long time in hospital and needs education on the ward. The schools network can deliver that, so teachers come in and use govroam to connect. But say the child also needs social care support; their care worker can also come in and complete their notes. The NHS has plans to introduce Wi-Fi into all hospitals, but only with a roaming network would doctors be able to access patient records quickly and securely during rounds. Simply being in a different location need no longer be a barrier to working in a familiar way.
But govroam could also be transformational. Take integrating health and social care. Browne suggests that if Wi-Fi providers supported govroam on a backchannel, care workers doing home visits could automatically connect and access client records. He tells of an elderly woman who fell down in her bathroom. She couldn’t move, so an alarm system was useless, but sensors installed in her house noticed she’d gone in but hadn’t come out, and triggered an intervention.
“Now if the care person going in had govroam,” Browne posits, “they could straight away look up the integrated care record and know ‘don’t give this person paracetamol, she’s allergic’, even if it’s not her regular care worker”. Together, it means you can enable people to continue to live in their own homes for longer and save money on care visits. “You’re saving lives through new technology backed off into govroam. Now we have a national solution, we can start thinking about these things. Suddenly we’re building a picture very different to the current situation.”
govroam will also be critical in helping local authorities cope with budget cuts. Site sharing with govroam would enable multiple organisations to share a physical location and connect over a single network connection. “So many public sector buildings are inherited from the Victorians,” says Browne. “It’s expensive, inefficient and in the wrong place. We’re shutting police stations, so police now have to drive to their beat, which adds expense and delay because that becomes part of their shift”.
Why not repurpose parts of local libraries, he asks. “govroam can give police access to their resources. Now you’ve got a reason for keeping the library open and a bobby starts his beat in the right place. These sorts of technologies free you up from being restricted to certain buildings and certain places. It can have a massive impact on the way we deliver services.”
Leeds is developing multi-tenanted sites, explains Browne, “in which you have people in one building working with police, health and parole services”. Using a single connection with multiple corporate LANs delivered over it, each organisation specifies the data layers it wishes to use, and these are delivered over one connection and then broken out for use by each organisation.
“That used to require segregation within offices: police desks here, social care there. With govroam, any worker can use any desk: your govroam authentication is captured by the building itself. Once it’s authenticated you, it works out the appropriate conditional access and reconfigures the network to extend, say, the police corporate network to your device. For that session, while you’re there, that desk becomes a police desk. If the next person to use it is in health, it then extends the health network”. This doesn’t just save money on desks and floor space: “it’s a way of encouraging interaction, freeing people up, promoting collaborative working. Anyone can operate anywhere. That’s massive.”
This kind of thinking is just the beginning, for Browne. Once it’s fully integrated into the public sector’s ways of working, he believes “they will use it in ways I can’t even imagine. govroam’s potential is limited only by our imagination.”
For now, there is a formal agreement that govroam and eduroam don’t connect, but it’s fundamental in many spaces that both are in place. NHS hospitals, providing care and training simultaneously, are a case in point. All they need to do, says Hayling, “is deploy a two-broadcast service – govroam, eduroam – with appropriate configuration behind the scenes, and they could deliver all the services they need to all their students and staff.”
People using it would only need to configure one device to use these two services: it would connect you automatically to the appropriate network. And, he says, “if organisations do it at the same time, the extra cost and configuration effort is small.”
“In Kent,” says Jeff Wallbank, former head of Kent Public Service Network, “every local authority has rolled out govroam: all public sector organisations work in any buildings, and are setting it up in parallel with eduroam. The question now is, can universities and colleges roll out govroam in their buildings?”
Kent PSN has more than 370,000 users across nearly 1,200 sites including health, schools, universities, fire and rescue, FE colleges and business parks, district and county councils, libraries, hospices and leisure centres. govroam itself is currently available at 250 sites and growing.
“Kent helped really drive this forward,” says Matt Ashman, founder and director of Khipu Networks, the company which helped deliver the pilot and now offers a govroam “one-stop-shop service” for organisations that don’t have the skills or time to configure their own in-house systems (Jisc runs govroam; Khipu offers the option of a fully managed commercial service which enables public sector organisations to offer eduroam and govroam, fully supported and maintained). Kent, continues Ashman, “is the flagship project which has delivered govroam across the entire county”.
“The challenge,” he reflects, “is we have to get a community together – there is no point deploying govroam into one hospital as a standalone; we need lots of hospitals where staff are working together”.
Wallbank also wants to see a common standard rolled out across the country. govroam was recently deployed for the first time in London, and the next step is convincing central government, whose sites are more widely dispersed: job centres, HMRC centres, prisons, courts and more. All these services tend to share buildings with other public sector organisations, but operate on separate networks. Wallbank wants “a set of standards; govroam or some form of national roaming needs to be standard. Then theoretically anybody delivering services can occupy anywhere with ease, temporary or permanent.”
Wales already has an aggregated public sector broadband service. “A simple SSID change, link to Jisc’s central national server and it’s rolled out in Wales… We’re having the same conversation in East Sussex,” Wallbank continues. “Connect their regional radius server to Jisc’s radius server, we’ve got govroam.” Then, who knows.
It won’t be long before a trickle becomes a deluge. “Senior officers in local authorities come up to me,” says Wallbank, “and say, ‘govroam isn’t half good, why haven’t we done this before?’”
Find out more at Networkshop
Mobility is one of the topics we'll be covering at this year's Networkshop, which is taking place in Nottingham from 11-13 April 2017.
Join us on the first day of the event for the parallel session on this topic, including a talk on govroam. Full details for all this year's sessions can be found in the Networkshop45 programme.
You can join the conversation on Twitter using #nws45.
There's a lot of interest in the next generation of digital learning environments and it was one of the topics we covered at Digifest 2017. We caught up with Lawrie Phipps, our senior co-design manager, who spoke with Ange Fitzpatrick from the University of Cambridge and Elizabeth Ellis from the Open University, about what those environments might look like.
For world-class universities keen to make the most of finite research resources, there’s an overwhelming business case for using Assent, argues Jisc's Peter Atkins.
If you’re a researcher, you’ll recognise the issue immediately. Accessing multiple resources – from leading physics facilities to high-performance computing (HPC) – can mean passwords, passwords and more passwords, not to mention the struggle of dealing with X.509 certificates.
Lydia Heck, a senior computer manager in the Institute for Computational Cosmology at the University of Durham and manager of the DiRAC data-centric system there, says the process of acquiring grid certificates (which expire after a year) and proxy certificates (which can expire before a job is completed) can be “an ordeal”.
From a researcher’s point of view, a complicated access process is an obstacle to collaboration and a drain on precious time.
Among the other issues is that researchers are often signed into multiple facilities via different authentication processes, so it’s difficult to determine who’s using which site or facility, when.
And there’s a security angle, too: without a system tied to a university login, it’s hard to protect against “rogue users” – ex-graduates or ex-employees who use unexpired logins, using hard-won research time without authorisation.
What is really required is a system tied to a user’s university login.
Jisc's Assent service allows researchers to log into high-end research facilities and web-based resources with a single, university-assigned ID. All a researcher has to do is remember a single username and password.
Assent acts as a “trust broker” between the research facility (the “service provider”) and the researching university (the “identity provider”) – both of which subscribe to, and are trusted by, Assent.
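In a brokered model like this, the service provider never handles the user’s password: it asks the broker, the broker asks the user’s home identity provider, and the facility acts on the resulting assertion. The sketch below illustrates that three-way trust relationship in toy form; all class names, organisations and credentials are hypothetical, and the real service is built on federated RADIUS/Moonshot infrastructure rather than anything resembling this code.

```python
# Toy illustration of a trust-broker authentication flow. Names are
# hypothetical; this is a sketch of the relationship, not the real protocol.

class IdentityProvider:
    """The researcher's university: holds credentials, answers auth checks."""
    def __init__(self, name, users):
        self.name = name
        self._users = users  # username -> password

    def authenticate(self, username, password):
        return self._users.get(username) == password


class TrustBroker:
    """Trusts a set of IdPs and SPs; the two sides need not trust each other."""
    def __init__(self):
        self.idps = {}
        self.sps = set()

    def register_idp(self, idp):
        self.idps[idp.name] = idp

    def register_sp(self, sp_name):
        self.sps.add(sp_name)

    def assert_identity(self, sp_name, idp_name, username, password):
        """An SP asks the broker to verify a user against their home IdP."""
        if sp_name not in self.sps or idp_name not in self.idps:
            return None          # one side isn't in the federation
        if self.idps[idp_name].authenticate(username, password):
            return {"user": username, "idp": idp_name}  # assertion (toy)
        return None

broker = TrustBroker()
broker.register_idp(IdentityProvider("durham", {"researcher1": "correct-horse"}))
broker.register_sp("diamond")

# The facility accepts the login only because the broker vouches for it:
print(broker.assert_identity("diamond", "durham", "researcher1", "correct-horse"))
```

Because the credential lives only at the identity provider, deregistering a user there immediately invalidates their access at every subscribing facility – the property that deals with the “rogue user” problem discussed below.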
Bill Pulford, science IT coordinator at synchrotron facility Diamond Light Source, has been leading a project to set up Diamond as an Assent service provider. Simplifying sign-in, he says, should help streamline workflows for researchers.
Pulford explains how it works in practice:
“Imagine a scientific research project such as finding new antibiotics to counter bacterial resistance; that is likely to be a collaboration involving a lot of different facilities, but Assent could help access these facilities during experiments.
“So you could log into Diamond and then access local instruments, an NMR (nuclear magnetic resonance) facility, a protein production factory, computing clusters and perhaps different small resources as well, at the same time.
“Collaborators could do the same from other facilities, and everyone would have the same view, which helps improve workflows as people acquire and analyse data. Without a common credential, and Assent technology, this is likely to be harder to manage.”
From the point of view of the facility, tracking Assent sign-ins could bring other benefits. Pulford adds:
“We’re under increasing pressure from government to record publications made by people who’ve done work at Diamond and to register publisher-centric ORCID identifiers (which identify individual researchers). Adopting Assent helps us harvest these ORCID IDs within the infrastructure that we’ve adopted.”
It’s reasonable to assume that such data could be useful to universities too – ultimately helping them take ownership of research usage data, and deriving academic value from statistical links.
And as for the rogue user problem, Assent stops it dead.
As STFC's Jensen explains:
"One of STFC's roles is to provide resources to researchers. As much of STFC's research infrastructure is expensive, we need to make sure it is used correctly by authorised people. A user who registers with a user office will get a credential from us, but it would make our lives easier if they could use credentials they hold already.
“The more credentials researchers have to manage, the more difficult it is for them – and the more likely it is they forget a password or to update their address when they change jobs."
But with Assent, says Jensen, identities are maintained separately from resources, so if a user's university login is revoked, so are their Assent privileges.
Chicken and egg
Establishing a trust network of organisations, however, can be something of a chicken and egg problem. Notwithstanding the valuable future benefits, universities’ IT teams may want to see Assent working “in the wild” before committing resources.
One short-term issue is that there is some technical work for universities to do, including setting up as an identity provider, before using Assent, which means making a business case for it.
Universities also need to install software on users’ desktops for Assent to work. Windows and Linux support for that technology is ready, and Mac support is imminent.
From her testing of Assent at Durham, Lydia Heck has noted that it takes some effort to work out how Assent fits together and acknowledges that it requires advanced technical knowhow – Jisc is looking at improving the documentation to help.
In the long run, the benefits of Assent look set to outweigh the short-term issues and, as initial users get the word out, it is hoped the trust network will grow, thus enabling more people and organisations to benefit from more effective, collaborative research.
At the same time, there are efforts to codify the issue of researcher sign-in. A pilot project, led by UCL and funded by the Engineering and Physical Sciences Research Council (EPSRC), is attempting to create a national infrastructure for research authentication. Assent looks set to be a cornerstone of this, so investing in it is a sensible strategy.
Ultimately, the essence of Assent is user simplicity – which makes business sense in the long run. “Let people concentrate on the science,” argues Pulford, “and we worry about the infrastructure.”
Jisc and CLIR, a US-based community-building, research and leadership organisation serving academic and cultural institutions, will explore transnational collaboration around the development of digital libraries and research data repositories.
The partnership will also focus on the professional development needs of their sectors, and shared services that could reduce costs, create greater efficiencies and better serve the academic research community.
“I am delighted at the prospects of working with Jisc to explore interdependencies across our organisations,”
said CLIR president, Charles Henry.
“Our approach will be more vigorous than collaboration or cooperation; we will explore integrating services, tools, platforms, research, and expertise that enhances the capacity of our constituencies, by providing an array of services and programmes we could not offer separately.”
The two organisations have agreed to work toward the following shared goals:
Advance skills and expertise relating to digital proficiency to help achieve the possibilities of modern digital empowerment for current and subsequent generations
Promote the highest quality of content and connectivity for digitally based education programmes, inculcating best practices and sharing of the most effective, robust tools and applications
Promote the development of a coherent, well-managed digital environment in support of innovative teaching and research, facilitating communities of learning and practice, and stressing the interrelatedness of all electronic-based academic efforts
“We’re excited to be working with CLIR, to both increase the savings and services already on offer through members, and to collaborate on our organisations’ areas of expertise for mutual benefit."
A joint CLIR/Jisc programme board will be established to develop, monitor, and report on collaborative projects going forward.
CLIR is an independent, non-profit organisation that forges strategies to enhance research, teaching, and learning environments in collaboration with libraries, cultural institutions, and communities of higher learning. Through its programmes, which include the Digital Library Federation and Digitizing Hidden Special Collections and Archives, CLIR aims to promote forward-looking collaborative solutions that transcend disciplinary, institutional, professional, and geographic boundaries in support of the public good. To learn more, visit the CLIR website.
"Institutions are bringing this data together into a central database, not just using it for learning analytics, but they are also very keen to make that data accessible and available for students to see" - Rob Wyn Jones, our senior data and analytics integrator, shares an update on learning analytics.
85% of further education (FE) students think an activity tracking app for learning and teaching would be helpful, finds our new survey.
Further findings show that 80% of FE students would be happy to have their learning data collected if it improved their grades, and more than half would be happy to have their learning data collected if it stopped them from dropping out.
These findings are unsurprising, as the survey also found that 99% of FE students think that technology is becoming increasingly important in education. Of those students, 76% think so because it makes life more efficient.
It’s no secret that sales of activity tracking apps and wearables have boomed, with The Telegraph reporting that over three million fitness trackers are flying off the shelves in the UK each year. From monitoring our fitness and sleep to testing our mental reflexes, self-improvement is officially the name of the game.
With FE students shouting out for the education sector to embrace both technology and the self-improvement movement, it seems that it’s high time for learning analytics to take centre stage.
This year, we will be releasing a learning analytics student app (part of our effective learning analytics project) so that students will be able to see how their learning activity compares with others and set targets to do better in their courses. This will not only benefit students, but staff members too, who will be able to view a dashboard showing the learner engagement and attainment of their students, allowing them to better target students who might be struggling with the course, and prevent drop-outs too. The app will also help staff members to better understand how to make learning more effective.
"It’s great to see that students are as motivated about the creation of an app to improve their learning experience as we are. The app allows students to set their own goals and monitor progress, and with such apps becoming commonplace in other sectors, it’s time that education reaped the benefits of such technology too.
At Jisc we believe that digital has the power to transform and revolutionise education, and our work with learning analytics is an important step in the right direction."
Startup Wildfire have been awarded investment and support to further develop their artificial intelligence (AI) answer to Wikipedia.
Wildfire was announced as the winner of our edtech startup competition at our annual Digifest event. The company will now get access to an intensive business accelerator programme and be provided with a support package to the value of £20,000, to allow them to develop their business and successfully deliver a fully-fledged product to UK education.
Wildfire have created the world's first AI content production tool, which turns any document, PowerPoint or video into high-retention online content in minutes rather than months. It’s based on recent academic learning theory on active 'effortful' learning, retention and recall.
Entrants to the edtech startup competition were subject to a panel interview and a pitch to Digifest attendees, who had the chance to vote for their favourite startup using the event app. The winner was then selected based on the vote and interview process that took place at the event.
Previously known as the Summer of Student Innovation, the competition is in its fifth year and seeks to find educational technology products that will benefit UK higher education, further education and skills.
After beating the competition to the investment, Wildfire CEO Donald Clark said:
"We're absolutely delighted to win. Not only did we win, we came top of a live audience poll which is pleasing! It’s great, it’s not just about the money, it’s about access.
It’s about Jisc helping me and other entrepreneurs to open the door to the addressable market of higher education, and also further education and apprenticeships and workplace learning."
"We were excited by all the innovative ideas presented to us, each in their own way could improve and enhance UK higher and further education.
For members of the Jisc panel, Wildfire really stood out, but we were also really impressed by the quality of entrants to this year’s competition, so we will be awarding up to £100,000 of funding plus support to five startups: Wildfire, Hubbub, Lumici Slate, Ublend and VineUp.
The competition has shown that the sector is set for a really exciting period of digital transformation, with edtech startups reimagining the world of education and the way it works."
Hubbub also won a slice of the prize. Their aim is to build a culture of giving in universities, making alumni, student and staff fundraising fun, engaging and inclusive. Hubbub helps university fundraising teams build strong relationships with students, staff, and alumni, engage them as volunteers or ambassadors, acquire new donors, and convert donors into regular supporters.
Jonathan May, CEO of Hubbub said:
"It’s the first engagement we’ve had with Jisc; we’ve been on the edge of edtech for a long time. We were looking for good ways to engage and programmes to participate in, and when the opportunity emerged we were delighted to see the quality of the competition – the depth of the questioning and the amount of work we had to do to persuade people that Hubbub is a good thing to invest in.
We’re really excited to be working with Jisc because the impact of the work could lead to a huge amount of money coming into the higher education sector."
New research highlights the growing importance of higher education staff being capable of delivering a technology-enhanced learning experience for students.
A survey of 1,000 16-24 year olds, commissioned by Jisc, found that three quarters (75%) of higher education students surveyed believe that having staff with the appropriate digital skills is an important factor when choosing a university. 99% of students think that technology is becoming increasingly important in education, while 62% believe technology keeps them more engaged.
“In today’s digital age, it’s crucial that institutional leaders stay up to date with digital trends and grasp how to leverage new technologies if they wish to deliver an enhanced learning experience to their students. Understanding technology and the digital world is no longer the sole domain of IT managers; all student-facing staff need to be digitally savvy.
Students’ expectations of a university are shifting: they live and breathe the digital environment and seek the same qualities in their university and its staff. If an institution wants to be an effective and attractive organisation, it has to live and breathe the digital world too.
Institutions that want to remain competitive need to commit to developing a digitally skilled workforce and embed digital capabilities into recruitment, staff development, appraisal, reward and recognition. Given these results and the growing level of competition both at home and abroad, universities should recognise this shift and ensure the digital agenda is being led at senior levels within their institution. Any universities that fail to do so put themselves at risk of becoming irrelevant.”
The potential of technology-enabled learning was a key theme discussed by further education and higher education managers, and sector experts at this year’s Digifest.
This year Jisc’s edtech startups competition has a twist. Next week, nine teams will pitch live to a crowd of sector experts and peers at Digifest, in an attempt to bag the grand prize of a support package worth up to £20,000.
From an AI content production tool to the ‘Spotify for textbooks’, what are the next big startup ideas that stand to shape the sector – and who would YOU like to win?
After their pitches, entrants will undergo a panel interview, and audience members will vote for their favourite startup using an app. The startup with the highest percentage of the public vote will go straight through, and four other startups will be decided based on the vote and interview.
Tackl is a smart university recruitment platform. It simplifies the university recruitment process by building a university talent inventory and providing access to it. As a result, students get much higher exposure to matching opportunities and employers, and employers can search for and hire students or new graduates quickly and with ease.
Tasha Hyeongyeon Choi is the founder of Tackl. With an education background in both engineering and economics, she is ever enthusiastic about ‘bringing cool technologies to the market and packaging them into cool products or services’. Previously, she has worked with popular Silicon Valley tech companies like Dropbox, Pinterest, Cooliris (acquired by Yahoo) and Quid. Tasha launched her tech startup in 2015 to build a data-driven university recruitment platform to close the talent gap between academia and industry. Her goal is to build the next-level solution for both students and employers and make university recruiting great again.
Ublend is a modern communication platform dedicated to education. It’s ‘…the easiest way to unite your class online, giving students and instructors an intuitive and inspiring space to communicate, collaborate and share class material.’ Ublend allows for collaborative revision, gives instructors essential privileges, and supports ‘rich educational content’ such as equations and code.
Anders Krohn, co-founder and CEO, studied at five universities on three continents, from a small community college in Georgia, US, to the University of Oxford. He is the co-founder of Project Access, a non-profit levelling the playing field in admissions to top universities and an adviser to Young Global Pioneers, a non-profit establishing diverse talent networks across the world. Anders is a Fung Scholar, a Rotary Scholar, and a World Economic Forum Global Shaper.
VineUp provides web and mobile applications to universities that enable them to leverage the collective knowledge within their alumni network for mentoring and career development opportunities for both their students and young alumni alike.
Luke Deering MD started his career with Estée Lauder in NYC, before taking a job with a Battery Ventures-funded startup called Panjiva. He left Panjiva in 2010 to work on what ultimately became VineUp.
The world's first AI content production tool that creates online content in minutes not months. WildFire takes any document, PowerPoint or video and creates high-retention content in minutes. It’s based on recent academic learning theory on active 'effortful' learning, retention and recall.
CEO Donald Clark has 30 years experience in the online learning business with a successful track record as an entrepreneur. He’s a professor at the University of Derby and has taught in higher education institutions in the UK and US.
Fluence uses technology to close the gap between what experts say, and what audiences need to know. Users can upload course and student content to Fluence, to reveal the educational priorities of the course, the knowledge gap between subject and learner, and the optimal learning curve for delivering the content to students. The technology allows universities and training providers to improve student engagement and to cut undergraduate drop-out rates.
David Hoare, director, has a background in computational linguistics and education modelling. He spent the last five years at the helm of Rapid English, which delivers specialist literacy support to vulnerable people out in the community. In 2016, David and his team launched the technology spin-off venture Fluence.
Hubbub builds a culture of giving in universities, making alumni, student and staff fundraising fun, engaging and inclusive. Hubbub helps university fundraising teams build strong relationships with students, staff, and alumni, engage them as volunteers or ambassadors, acquire new donors, and convert donors into regular supporters.
Hubbub is made up of a UK team of 15 spread across London and Bristol, with a two-person US office in Michigan and a new office opening in Connecticut in 2017. CEO Jonathan May is one of the leading experts in digital fundraising in the UK, is a More Partnership Associate, and 2017 Winston Churchill Fellow studying alumni networks in the US.
Bibliotech is the Spotify for textbooks. It’s a web app providing online access to thousands of textbooks and learning materials for an affordable monthly subscription fee. The unique model works for students, academics, universities, and publishers.
David Sherwood is a Rhodes Scholar, co-founder of teachlearngrow.org.au, and was responsible for raising and managing £150,000 to provide 10,000 hours of free tutoring and mentoring.
Goodwall is a social network where 14-19-year-olds build profiles to show what they have achieved, connect with universities and fellow students, find learning and internship opportunities and win scholarships and awards. It has a fast-growing and inspiring community of over 850,000 students from over 150 countries.
Osnat Shostak leads partnerships at Goodwall. She is passionate about education and helping students gain access to the right opportunities for them, whilst obtaining tools and guidance to realise these opportunities.
Samuel Inkpen leads business development in the United Kingdom, a key market for Goodwall. Sam is passionate about higher education, having begun his career at the London School of Economics and King’s College London in external relations roles.
Lumici Slate is a cloud-based lesson planning and delivery tool. Teachers can plan individually or collaboratively and share their plans with colleagues and students, wherever they are based. The software includes high-quality resources and makes it easier for teachers to share the ones they have developed or found and deliver lessons that engage their students. Because it is online, it places no burden on an institution’s own infrastructure and can be accessed by everyone from any device, wherever they are based.
Lumici hopes to ease the burden of lesson planning that affects teachers across all stages of education.
Atif Mahmood, founder and CEO, is a former teacher, director of learning technologies and educational technology project manager at Cambridge Assessment.
As a teacher, Atif saw first hand how using technology in the classroom can boost learning but also how difficult planning and applying it well can be for some teachers. He used this vision, along with his teaching and commercial experience, to develop his idea and launched the Lumici Slate in 2014. The aim of the company is simple, “to harness the power of technology to improve learning.”
Join us at Digifest 2017
We'll be celebrating the power of digital in education at Digifest 2017, taking place from 14-15 March at the ICC in Birmingham.
The winning startups will be announced at the event.
Is digital technology making fundamental changes to learning and teaching, transforming it in ways that were unimaginable before the advent of the internet? Or has the digital revolution been overhyped as magical pixie dust that can cure all teaching ills?
Digifest is pitting two experts in the field against each other in this big digital technology debate.
Neil Morris, director of digital learning at the University of Leeds, is arguing for the motion that digital technology is fundamentally changing learning and teaching.
Amber Thomas, service owner: academic technology support at the University of Warwick is arguing against the motion.
Digital technology is fundamentally changing learning and teaching - Neil Morris
The widespread availability of mobile and desk-based devices with incredible computing power and functionality means that learners are now able to consume and interact with learning content provided by their teachers, by their peers, and by individuals and organisations around the world. And they can do this in ways that were not possible before the widespread advent of the internet.
That’s a fundamental shift in the way that education is available to learners, not least because it makes it accessible to those who would have previously found it extremely difficult to enter formal education. On a global scale, people are now able to learn in ways that would not have been possible without digital technology, for example using massive open online courses (MOOCs).
I think there are three main things that digital technology is changing, none of which were imaginable before we started to integrate digital technology into education.
First, it’s about the flexibility of learning, which means being able to alter the place, the pace and the mode of learning. The growing use of blended learning provision, hybrid and fully online distance learning courses is offering choices for learners about how to integrate their education with other aspects of their lives. This is a fundamental change in the access to learning as a result of digital technology.
Secondly, there is a fundamental change in the way that learners are able to gain knowledge, skills and competencies through the use of technology, which is going to be useful for their future employment in our increasingly digital world. Learners are gaining digital skills when learning online and educators are increasingly recognising the need to focus on honing these skills to cope with the massive amounts of information that needs to be searched, refined, categorised and understood.
Finally, there’s a fundamental change in the way that learners are able to interact with other individuals, both their peers and educators, from all around the world as a result of digital technology. This is supporting increased cultural awareness and globalisation.
From the teaching perspective, digital technology is enabling teachers to create more interactive, engaging, flexible learning materials in a range of digital and multimedia formats and make them available to students online. Educators are also able to teach in a variety of different ways in the classroom, through the use of in-class technologies, online materials and students’ own mobile devices. These changes are enabling educators to have a more diverse set of pedagogical approaches to support their learners, which means that they can be more inclusive in their teaching methods.
Digital technology supports teachers’ in-class activities, it supports their online content and it enables educators to interact with learners via online classroom technologies. This enables them to be more flexible in the way that they communicate with learners so that they are not limited to face-to-face meetings in their office at set times.
Overall, it is undeniable that digital technology is already fundamentally altering learning and teaching. However, there is so much more transformation that is required and is possible with digital and this full potential will only be realised by organisations and teachers recognising that change is needed, and investing in the infrastructure, strategy and development needed to support it.
Digital technology is not fundamentally changing learning and teaching - Amber Thomas
It is undeniable that higher education is changing. But is digital technology the cause or the symptom?
The drivers for changes in teaching and learning in higher education are socio-economic, related to the way student fees are funded, changes in the job market, the currency of a degree and the skills people need. As a result of those drivers we see technologies used in particular ways.
Over the last ten or 20 years we've seen a massive expansion of higher education, and some of the use of technology that we see is in response to that.
Take lecture capture technologies. Critics argue that they give universities an excuse to avoid tackling overcrowded lecture theatres. But the need was already there: student numbers have been growing for twenty years and recording has been possible all that time.
The last five years have brought affordable, institutional-scale solutions, and students have started demanding them. There is now a solution to meet a clearly articulated demand; that's why lecture capture practices are growing. If we don't understand what drives the use of technology in higher education, we could be putting effort into areas that aren't going to get traction. We find ourselves promoting eportfolios to the wrong groups of students, when what they're asking for is lecture capture.
Or take MOOCs. It was widely assumed that MOOCs would be useful for preparing students before they came to university or promoting undergraduate recruitment. The statistics on MOOC uptake show that they are primarily taken by people in their 30s and 40s who already have a degree. The way we thought the story would pan out, with certain drivers and certain forms of demand, is not quite the way that it's gone. We should learn the lessons as we go.
One of the risks of not understanding the lessons is that we buy into the idea that digital technology is magical pixie dust that will fix all the problems. But digital is the end point of the chain. In fact, the real change lies in the enablers to creating a great digital product or digital course - things like changing the way that course teams work, putting real structure into learning designs, course objectives and learning outcomes. That's the work that has the profound effect, not the fact that it's digital.
For people like myself, who work in institutions and are there to help solve problems and support progress in teaching and learning, the conversations we need to have with academics are about what their course is about, how the learning is designed, how their teams are structured and what time they've got put aside for running an online activity. Those aren't technical concerns – and that can be quite disappointing for those who believe that we have the magical pixie dust of technology to scatter over their courses for them.
The danger of the magical pixie dust fallacy is that digital technology is an easy thing to blame. If you've not got your course right, you can treat it as a tech fail. Whereas, actually, the thing that didn't work might have been your learning design or your assumption of students' prior knowledge or the group dynamics in an activity - but it is much easier to blame digital technology. And that's why we really need to understand where it's useful, to learners and teachers.
Otherwise it's all just snake oil.
This is one of a series of features on topics covered at this year's Digifest, which is taking place on 14-15 March 2017.
Amber and Neil took part in the debate, digital technology is fundamentally changing learning and teaching in higher education, on day one. Full details for all this year's sessions as well as many of the presentation slides can be found in the Digifest 2017 programme.
You can join the conversation on Twitter using #digifest17.
Drones, robots and driverless cars – we’re living in an era of science fiction made real, says Jisc’s futurist, Martin Hamilton. But how comfortable are we with artificial intelligence in the classroom?
Let’s spend a moment thinking about science fiction: humanoid robots you could have a conversation with. The ship’s computer from Star Trek, or maybe Orac from Blake’s 7. Robot soldiers that can climb stairs. KITT, the self-driving car from Knight Rider.
For Generation X folk like me, these things were always some distance in the future. It was a little disappointing when the year 2000 came and went, and we were still waiting for our jetpacks and holidays on the moon. But have you noticed that all this has changed recently?
The future is here
Now you can buy the equivalent of Orac, the Amazon Echo Dot, for around £50. And there are now over 10,000 third-party “skills” that extend the capabilities of Alexa, Amazon’s virtual assistant.
Here are just a few of the things you can do with Alexa: turn lights on and off in your house; turn the kettle on; control your smart thermostat; listen to an internet radio station; check the weather forecast; check the travel along your commute route – and search for all sorts of information.
Nao, SoftBank's small humanoid robot, has a bigger brother known as Pepper, who we are introducing to delegates at this year’s Digifest. Pepper is four feet high, and glides around on a wheeled base. And yes, you can have a conversation with “him”. What’s more, Pepper was designed to recognise emotion through analysing facial expressions, tone of voice and body movements.
Self-driving cars are starting to find their way onto our roads too – some 90,000 Tesla electric cars have been shipped with Autopilot, which gives the car limited autonomy, and the company recently stated that all new models ordered with Autopilot would be capable of full autonomy. Teslas with Autopilot can already park themselves and come to fetch you, just like KITT.
We might not have our jetpacks yet, but there are a lot of companies working on human-scale drones, such as China’s Ehang. Ehang have made a single-passenger driverless drone, which will be going into service as an air taxi in Dubai this summer. Crucially, the Ehang drone flies itself, which should mean that the human passenger does not require a flying licence.
Are there any common threads in these developments?
Firstly, there’s the massive computing power that’s now on tap through cloud computing platforms like Microsoft Azure or Amazon Web Services. Secondly, there’s the artificial intelligence (AI) and machine learning that underlies Alexa and Autopilot. Thirdly, we are now working with data on an industrial scale thanks to the proliferation of smartphones, tablets and apps.
These three trends have come together to make all sorts of new things possible. One of my personal favourites is the use of AI in Google Photos to find categories of picture automatically – for instance, pictures with a particular person’s face in or pictures of cats.
Just like when you first use Alexa, this feels distinctly like magic. However, much of the underlying technology has been made open source, so you can get under the hood and tinker if you want to. Google’s TensorFlow is probably the key piece of software for that kind of exploration.
The next generation
And now we’re starting to see the next phase, which is when the AI goes off and creates something of its own. Google’s Deep Dream project takes this approach to create surreal and psychedelic art by enhancing particular aspects of an image file on request, eg “make it more cat-like”. We’ve also seen generative poetry, music and other creative arts.
At this year’s Digifest I’m taking a look at the rise of the robots and artificial intelligence, and what it could mean for teachers and learners. We’re already starting to see online learning apps that take an AI-based adaptive approach, acting as a coach and mentor by reinforcing key concepts with which the learner is struggling. I suspect we’ll quickly come to see AI-based assessors, careers advisers, and find AI in all kinds of other job roles.
For educators looking to exploit the potential of AI, this aspect throws up a lot of interesting questions. I picture my own kids going to visit the robot careers adviser, perhaps a descendant of Pepper, and questioning its recommendations.
Today’s AIs are driven to a large extent by what we could call pattern recognition – here are a million pictures, and half of them are pictures of cats. Now here’s a new picture – how likely is it to be of a cat? If we swap cats for careers, then the best that Pepper could probably say is “kids like you mostly went on to college”, or “you look like someone who likes to do hands-on work”.
How will we respond if Pepper says “give it up - you’re never going to amount to much”? Or to put it another way - what do we do when the computer says no?
But before we get too carried away, it’s important to note that today’s AIs are much better at recognising patterns than they are at coming up with stuff themselves. The state of the art is a technique called Generative Adversarial Networks (GANs). GANs work by pitting a neural network that generates stuff, such as cat pictures, against another neural network, known as the discriminator, which scores its results - how cat-like is this picture, compared with that million image dataset?
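The generator-versus-discriminator loop can be sketched in a few lines of pure Python. This is a toy illustration only, under loud assumptions: the "generator" is a single learnable number, the "discriminator" is a fixed scoring function, and learning happens by hill-climbing rather than the backpropagation a real GAN would use. The names (`REAL_MEAN`, `avg_score`) are invented for the sketch:

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real data" distribution the generator must imitate

def discriminator_score(x):
    # Stand-in discriminator: rates a sample as more "real" the closer
    # it falls to the real distribution's mean (score in (0, 1]).
    # In a real GAN this would itself be a trained neural network.
    return 1.0 / (1.0 + abs(x - REAL_MEAN))

def avg_score(mean, n=200):
    # Average discriminator score over n samples drawn from a
    # candidate generator distribution centred on `mean`.
    return sum(discriminator_score(random.gauss(mean, 1.0)) for _ in range(n)) / n

# Stand-in generator: one learnable parameter (its mean), nudged in
# whichever direction the discriminator rates as more "real".
gen_mean, step = 0.0, 0.1
for _ in range(300):
    up, down = gen_mean + step, gen_mean - step
    gen_mean = up if avg_score(up) > avg_score(down) else down

print(round(gen_mean, 1))  # ends up close to 5.0
```

The adversarial feedback is the whole trick: the generator never sees the real data directly, only the discriminator's opinion of its output.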
The black box nature of this process makes it hard to ask perhaps the most fundamental question of all – “why?” And if you cast your mind back to those old science fiction tropes, this was often the point at which the rogue computer exploded in a puff of logic!
Find out more at Digifest
This is one of a series of features on topics that will be covered at this year's Digifest, which takes place on 14-15 March 2017.
Martin will be giving his talk, loving the alien - robots and AI in education, during the morning of day one of Digifest. Full details for all this year's sessions can be found in the Digifest 2017 programme.
Digifest attendees will have a chance to meet Pepper the robot, as well as sample the latest technology in development, in the Digi Lab.
A 21st-century university or college needs to be social media savvy to thrive, if not survive. So why is there still so much resistance to exploiting social media opportunities in education? Eric Stoller looks at why some educators are stranded in the foothills of the social media mountain, and offers inspiration - and suggestions of the best apps and networks - from those who are at the peak.
It’s a common manoeuvre for bloggers to write up lists about the “best new apps” for teaching, learning, and engagement on the basis that there’s always something on the cutting edge that might be new for educators. However, the reality is that the rate of adoption of new technologies within higher education is always going to be less about the availability of whizzy new tools and more about a willingness to learn new things. And that comes down to whether or not experimentation (and sometimes failure) is rewarded.
In 2017, an interconnected, digitally-engaged university is the norm. Yet, there is still a lot of resistance to using social media within education.
In my blog post on why educators can’t live without social media, I presented a clear set of ideas, concepts, and arguments that offered ample opportunities for educators to use social media within the context of their work. Teaching, learning, retention, marketing, career development, digital literacy, engagement, and the student experience are all positively impacted by well-thought-out social media initiatives.
Educators are role models for their peers, students, staff, and anyone who connects with them on digital channels.
Leading from the top
So, what determines why some educators jump at the opportunities social media presents and others do not? I believe that organisational change, growth, motivation, and leadership influence how well social media are used within a university.
For example, a vice-chancellor who tweets or shares snaps is a role model for everyone who works at an institution. That simple act of leading by social sharing on digital channels showcases permission for others within a university to do the same. Digital leadership matters.
This kind of encouragement could be taken further. It’s been discussed within certain digital communities of practice (eg #LTHEchat on Twitter) that social media be added to annual appraisal metrics for educators. Regardless of which channel or tool is used, it’s a measure of innovation, creativity, and growth.
The challenge that social media presents for educators is that there exist myriad opportunities for use within countless networks. The “target” is constantly moving. Change is the norm. Apps and social networks are always evolving. New functionalities and ways of being on social media are frequently released. How do educators keep up with this constant churn? Prioritisation is key, along with building in time for “DPD” – digital professional development.
Social media channels - where are we now?
Given that, which social networks and apps should we be paying attention to right now? It depends, of course, on what you’re trying to accomplish.
Facebook is still the most widely accessed social network on the planet. Facebook Live has been used by educators around the globe for lectures, guest speakers, and open days.
Twitter is a multifaceted communications channel that can connect a variety of stakeholders and is often a “tip of the iceberg” for meaning making, learning, and global engagement. Similar to Facebook Live, Periscope (a Twitter property) offers up live broadcast functionality with social connectivity via mobile devices.
Houseparty, from the makers of Meerkat (a now defunct competitor to Periscope), allows for group video conversations.
While some of this live social functionality may currently be mostly informal, there exists a lot of potential for educators to use these spaces for teaching, learning, and enhancing the student experience. In any case, students and staff are now able to broadcast anytime, anywhere.
However, messaging apps that offer a more closed loop of interaction have also emerged as being highly valuable for student engagement. Take Snapchat or WhatsApp. These are hugely popular apps with students. Both apps offer one-to-one or group-based connectivity that doesn’t exist in a larger public sphere. Other apps to be aware of in this space include WeChat, Signal, and Facebook Messenger.
The daily ephemerality of Instagram Stories and Snapchat Stories provides opportunities for quotidian storytelling that can be useful for a variety of educational purposes.
Student engagement is a major factor in student retention. The more students are engaged while at university, the more likely they are to be successful.
Engagement takes place inside and outside of the classroom. Social media affords student services practitioners the opportunity to connect with students and build community, scaling interactions from one-to-one to one-to-many. For example, Facebook Groups have long been a way for administrators and educators to engage with large numbers of students, reaching them whenever and wherever.
Can educators survive without social media? Of course. However, can education survive without educators who are willing to learn how to use digital channels to benefit their students?
Today’s enrolment atmosphere is highly competitive and those institutions that can demonstrate that they are properly connected universities will have an edge with recruitment, retention, branding, teaching, learning, employability, and alumni development.
Digifest 2017 - join the debate
This is one of a series of features on topics that will be covered at this year's Digifest, which takes place on 14-15 March 2017.
Eric will be giving his talk Part Deux: why educators can't live without social media at Digifest during the morning of day two. If you're not attending in person, we'll be livestreaming this session as part of our online programme.
Computers don’t turn up to work hungover, stressed or loaded with unconscious biases, so why shouldn’t they be used for routine interventions with students? They can deal with sensitive situations as well as humans, if not better, argues Digifest debate speaker Richard Palmer.
Learning analytics is going to become ubiquitous in UK education – and with good reason. Tracking, in near-real time, individual student engagement, attainment and progression has been shown to improve the educational experience for students, leading to better grades and higher retention rates.
Learning analytics provides institutions with huge amounts of data but the crucial point is that it is actionable data. Institutions can use the data about students to predict which students may need support or be at risk of withdrawal or how students are going to be best served by them. It is data that can lead to interventions.
Learning analytics and interventions
An intervention could be a number of things. It could be sending an email or a text message to a student saying, “it doesn't look like you attended your lectures all week – is everything ok? It doesn't look like you handed in your most recent piece of coursework on time – is there anything we can do to help?”
More advanced systems offer advice based on prescriptive rather than predictive analytics. So, for example, if a learning analytics processor notices that, although they are doing the work and obviously showing the intelligence, a student's written assessments are scored lower than other objective criteria, it might ask them if they know about the help available in academic writing classes. There's a really broad range of what an intervention might be.
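The rules behind these interventions can be surprisingly simple. As a sketch only – the field names and thresholds here are invented for illustration, not drawn from any real learning analytics schema – a minimal rules engine might look like this:

```python
# Hypothetical student records; field names are illustrative only.
students = [
    {"name": "A", "lectures_attended": 0, "coursework_submitted": False,
     "written_score": 48, "other_scores": 72},
    {"name": "B", "lectures_attended": 5, "coursework_submitted": True,
     "written_score": 70, "other_scores": 68},
]

def interventions(s):
    """Return the nudges a simple rules engine would send this week."""
    msgs = []
    # Predictive-style triggers: engagement signals that something is wrong.
    if s["lectures_attended"] == 0:
        msgs.append("It doesn't look like you attended your lectures "
                    "all week - is everything ok?")
    if not s["coursework_submitted"]:
        msgs.append("It doesn't look like you handed in your most recent "
                    "piece of coursework on time - can we help?")
    # Prescriptive-style rule: written marks lagging well behind other
    # objective criteria suggests pointing at writing support.
    if s["other_scores"] - s["written_score"] >= 15:
        msgs.append("Did you know about the help available in academic "
                    "writing classes?")
    return msgs

for s in students:
    for msg in interventions(s):
        print(f"To {s['name']}: {msg}")
```

Student A, who has missed lectures, missed a deadline and is underperforming in written work, triggers all three nudges; student B triggers none.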
The trouble with humans...
Should those interventions brought about by learning analytics always be mediated by a human? No.
First of all, humans have a long history of believing that when certain things have always been done in one way, they should stay that way, long after there is any need for them to.
If you look at the Luddite rebellions, people once thought it should always be a human being who stretched wool over looms, and now everyone agrees that's an outdated concept. So, deciding that something needs to be done by a human because it always has been done by a human seems, at best, misguided.
Secondly, people object that the technology isn't good enough. That may, possibly, be the case right now but it is unlikely to be the case in the future. How difficult is it to intervene with a student identified as at risk by a learning analytics processor? Is it harder than driving a car, which computers already do better than us? Is it harder than being a world champion at poker, which we now know that computers are better at than us? Or playing chess or landing things on comets, all of which computers do better than people? Are we saying that it's just this single aspect of human experience that is unique? That seems unlikely.
Technologies will improve. Learning analytics will become more advanced. The data that we hold about our students will become more predictive, the predictions we make will be better and at some point institutions will decide where their cost benefit line is and whether everything does have to be human-mediated. I have no doubt that some universities will adopt at least partial automation of certain interventions in the not too distant future.
Thirdly, how good do we actually think people are? Certainly, human beings can empathise and pick up on non-verbal or even non-data-related signals from other people, but when was the last time a computer turned up to work hungover? Or stressed or worried about something – or just didn't turn up at all?
Computers aren’t intrinsically prejudiced against people of different genders or races or sexualities. According to the Harvard unconscious bias survey, 82% of people have some unconscious bias either pro or anti black or white people; only 18% show little or no preference. 85% have unconscious biases around preferring people who are fatter or thinner. Will a computer ever be better than the perfect person? Maybe, maybe not. But, let's face it, people aren't perfect.
The downsides of machine interventions are no worse than humans doing it badly – and humans are pretty good at doing things badly. We worry about computers sending insensitively worded emails and inappropriate interventions but we all know human beings who are poor communicators, who are just as capable, if not more, of being insensitive.
There's certainly a risk of sending too many emails or texts and diluting the message, but that's easily fixed by appropriate development and testing. And with a computer, if you program it properly, it does what you tell it. In contrast, no matter how good you are at writing policies, humans don't always follow them.
The point at which a human intervenes depends entirely on how good the system is. If an institution sends an email to a student saying “we've noticed you haven't turned up all week – is everything ok?”, and the student responds via email, a computer with natural language processing is fully capable of understanding the response.
If the student replies along the lines of “I was busy / I was poorly / I'm all right now, I'll be in on Monday” then there is no real need for human interaction – the computer can deal with it. But, at the same time, a computer can be programmed to know what its limitations are. So if it doesn't understand the response or sees something it is not pre-programmed to deal with in the response, such as a mention of bereavement or money worries, then that's the point to hand something on to a person.
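That hand-off logic – deal with the routine, escalate the sensitive or the unclear – is easy to make concrete. The sketch below uses crude keyword matching where a real system would use natural language processing, and every keyword list is an invented assumption, but the escalation structure is the point:

```python
# Illustrative keyword lists - a real system would use NLP, not
# keyword matching, but the escalation logic is the same shape.
ESCALATE = ("bereavement", "died", "money", "debt", "depressed")
ROUTINE = ("busy", "poorly", "i'll be in", "all right")

def triage(reply):
    """Decide whether an automated system can close the loop itself."""
    text = reply.lower()
    # Sensitive topics always go to a person, whatever else is said.
    if any(k in text for k in ESCALATE):
        return "hand to human adviser"
    # Routine reassurances can be acknowledged automatically.
    if any(k in text for k in ROUTINE):
        return "auto-acknowledge"
    # Anything the system doesn't understand also goes to a person:
    # the computer is programmed to know its limitations.
    return "hand to human adviser"

print(triage("I was poorly, I'll be in on Monday"))      # auto-acknowledge
print(triage("Struggling since a bereavement at home"))  # hand to human adviser
print(triage("qwerty??"))                                # hand to human adviser
```

Note the default branch: the safe failure mode for an unrecognised reply is a human, not silence.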
These systems can be programmed carefully and tested properly to know their limitations and can – and will – expand their capabilities over time.
I'm not saying we're there now but if you think this is a sacred cow then you should think about the things you do today that were science fiction ten years ago.
Digifest 2017 - join the debate
This is the fourth in a series of features on topics that will be covered at this year's Digifest, which takes place on 14-15 March 2017.
Richard Palmer will be on the panel for our debate, learning analytics interventions should always be mediated by a human being, which takes place in the morning on day two of Digifest. Full details for all this year's sessions, can be found in the Digifest 2017 programme.
What’s the secret to great technology-enhanced learning in FE? We asked AoC Beacon Awards winners to spill the beans.
“Safety! Emergencies! Safety! Educational!”
reply the children of Springfield to Edna Krabappel’s exasperated question: “you’re children, why do you all need cell phones?!” Failing to capture the kids’ attention, their eyes locked to screens and thumbs jerking frantically against a backdrop of bleeps and buzzes, Edna gives up and confiscates the lot.
The Simpsons' scriptwriters have clearly not read the FELTAG recommendation that, by 2016, further education (FE) providers should be delivering 10% of their courses online. It was just the latest in a long line of similar suggestions, but this time investment in technological infrastructure, digital training and blended learning has been prioritised.
We asked award winners what really works on a practical level.
James Kieft is learning and development manager at Reading College, which recently won a national excellence award for introducing cloud-based technology on campus. Edna Krabappel would doubtless be horrified by Kieft's view that “if you ban technology from the classroom, students are going to use it anyway; you need to embrace it and make sure they are using it appropriately.”
“At first,” he recalls, “it was about getting staff to realise what free stuff was available in the browser, always with a focus on what they want to achieve within teaching”, and his role was to signpost them to simple tools that work. That could be anything from apps adding voiceovers to presentations, to tools for creating animated videos – and he focused on free, browser-based software so the tools would work across a range of devices.
“It removes barriers, and staff are more willing to experiment. Because you’re not having to install it, you’re not relying on one tool – it’s constant innovation.”
Constant change can be a source of anxiety, and Kieft quickly identified that the area staff struggled with most was confidence, compounded by a lack of time to self-educate. He insists that one member of staff must be at the forefront, doing the hard graft of researching new tools and suggesting how they could be used in classrooms.
Kieft set up a blog in 2013, called “James thinks it’s worth a look!” - which he uses to share new finds with colleagues at Reading and partner colleges in Banbury and Oxford. The blog is complemented by a YouTube channel for video tutorials, and his team also provide handouts, workshops, demonstrations and a mentoring scheme, where they watch a couple of lessons and suggest where staff could use technology more effectively.
The most successful technological innovation, however, was also the most unexpected. With his team's focus on in-classroom productivity tools, Kieft says he “didn’t realise the Google+ community aspect would prove so popular with students and staff”.
The college had previously used Facebook to connect with learners, but the “issue with that is you’re invading their space and trying to be hip – while Twitter is all in the public domain. Google+ had the advantage of not being widely used, so we’re not treading on their toes but are still connecting them with their tutor and peers in a space private to that community.”
More than that, though, Kieft says there’s evidence these communities promote more collaborative and independent ways of working: “it takes pressure off the tutor in that questions can go via the community and the tutor can just act as a moderator. We’ve even found students contributing to the design of a course, because they’re introducing others in the community to articles and resources they think are relevant.”
Kieft is clear that “you have to set expectations about when and how often you will respond via the community; students could expect a 24/7 service.” But he also insists that technology saves time overall. He mentions the "talk to type" function in Google Docs, which is “a real timesaver” when marking.
“The challenge we all face,” he concludes, "is resource creation. Relying on dedicated digital learning teams to produce tens of thousands of resources is impossible; we need to empower staff and students to help.”
The gauntlet has been taken up at Heart of Worcestershire College, which itself won an award in 2014 for introducing a complete blended learning model that delivered online teaching across the whole of its curriculum. The college is now spearheading a consortium of more than 80 FE colleges, with the aim of transforming the production of learning resources.
Hear Peter Kilcoyne discuss their effective use of technology in FE, which won the college an AoC Beacon Award.
When the consortium was set up in 2015, explains Peter Kilcoyne, information technologies manager, the idea was that
“rather than all working in siloes, colleges should share developments that would benefit everybody”.
Each college pays £5,000 a year to join, which is recouped not only in resource provision but also in-house training and software discounts. “Rather than lecturers having to find free resources or spend ages making them individually,” says Kilcoyne, “they get very high quality resources specifically written by people teaching those courses.”
“This is not just about pdfs,” he continues.
“We develop interactive learning objects with assessments built in, so that the scores go into virtual learning environments where lecturers in college can monitor students’ understanding of what they’ve done.”
The consortium is all about joined-up thinking: linking teaching and assessment within college; connecting up with other colleges.
To date, Kilcoyne estimates the consortium has developed over 1,000 hours of learning resources – “more than any college could have developed on their own” – and during this first full year of usage, the impact is immediate.
Feedback from students is outstanding, and there is interest in setting up sister consortiums with colleges in the US and South Africa.
“Post-Brexit, reaching out to other countries is part of what we do,” Kilcoyne says, and he is keen to emphasise the potential of international co-operation to broaden students’ outlooks and improve their employability.
Moodle and Mahara
Employability is the big added bonus of making students tech savvy, as they have discovered at Forth Valley College, where development support officer Rob McDermott celebrates open-source eportfolio platform Mahara.
It was first introduced to support creative learning and assessment: “video and vodcasting is a big thing for us,” says McDermott.
“Students capture evidence related to course learning themselves, upload it to our multimedia server and submit it using Mahara.” The portfolio offers a simple way to track what they’ve done, record the techniques they’ve used and get feedback via Moodle.
That’s because Moodle talks to Mahara, and is the baseline for everything Forth Valley does. McDermott is a strong advocate of bring your own device and remote software, but he has also found that Moodle remains “very effective”.
It took two years, he estimates, for staff to feel comfortable using it and for the platform to stabilise after installations and updates, but they’ve reached the point now where
“staff are starting to play and think about using it creatively. It started as a repository, but now we’re starting to think how can we use it more proactively for teaching.”
What they’re striving for, McDermott explains, is “seamless integration: everything going through one or two portals, capturing lots of data that works not only with teaching but also support departments.”
Technology, he contends, is a “whole college” approach:
“teaching, learning, technology – we don’t separate them out, they are together and what we do.”
Join the discussion at Digifest 2017
This is the third in a series of features on topics that will be covered at this year's Digifest, which takes place on 14-15 March 2017.
Our talk, FE technology-enhanced learning: creating a digital environment that enables effective teaching and learning, takes place on day one of Digifest at 11:00. If you're not attending in person, we'll be livestreaming this session as part of our online programme.
You can also catch Forth Valley College's Rob McDermott in our workshop: learner engagement - how can you overcome the challenges and develop opportunities to create a creative curriculum, which takes place in the afternoon of day one.
How do students use digital resources? How does this change the way we teach? Digifest speakers reveal how creative teaching, and co-creating with learners, can turn online archives from passive stores of information to spaces for innovation.
In The History Boys, Alan Bennett’s play about a clash of educational cultures, Hector reflects on what he sought from university as a young man: “I wanted somewhere new. That is to say old. So long as it was old I didn’t mind where I went. […] Cloisters, ancient libraries… I was confusing learning with the smell of cold stone.”
A redbrick university disabused him of the mix-up, but in the decade since Bennett’s play we have become just as likely to assume the opposite: that learning is inextricable from the smell of dust-laden PC fans and overheating plastic.
In fact, when it comes to capturing and reproducing teachable content for higher education (HE), the most common technologies are now fairly antiquated. Virtual learning environments (VLEs), PowerPoints and podcasts have all been around for years without substantially changing the way we teach.
However, with digitised resources, more is easily available to students, teachers and researchers than ever before – and growing every year. The question is, what meaningful use can be made of it?
What digital resources can do, for starters, says Raphael Hallett, professor of history and director of the Leeds Institute for Teaching Excellence, is “demystify the idea of the historical source” and help students experiment with primary texts much earlier in their degree.
Hallett highlights an undergraduate module on witchcraft in early modern England, designed using Jisc principles of digital literacy. Let loose on the Early English Books Online platform (delivered via Historical Texts) in week one, students are asked to examine anti-Catholic propaganda pamphlets, choose one and annotate it. Year on year this builds a repository of source commentaries.
“The great thing”, says Hallett, “is that it is a collective resource. An MA student writing a dissertation on ‘the demonic’ during this period received access to the feedback, so it fed into postgraduate research as well.”
Teaching has the potential to turn online archives from passive stores of information into resources that students themselves can reshape. Hallett insists that this module is “only possible through the process of digital encounter – it is not possible as hard copy”.
However, he still converts his students’ commentaries into physical copies. It’s a curious paradox:
“students still really like the sense of ownership that comes from an encounter with material texts, but that is only possible through online editing. It counters the idea of the digital as the loss of the textual: this is reborn as text and students like that.”
To take full advantage of this, Hallett suggests universities need “a new student epistemology” which he characterises as “the hyper-visualisation of knowledge, ideas, arguments”.
“Jisc took a risk presenting the library in this way,” concedes Hallett, “but it is probably the most innovative way of presenting information that I’ve seen and certainly the most attractive to undergraduates. Even if it’s just to get them more curious about the collection, it’s a very good way of beckoning them in to primary sources.”
The UK Medical Heritage Library is also a key resource for Keir Waddington, professor of history at Cardiff University, who uses it to structure undergraduate courses on 19th-century public health. Waddington finds that students’ “first instinct is to reach for the digital rather than the material”, and agrees that packaging research attractively means “we can get students looking at data in different ways, not just turning up in the classroom with source-packs.”
This, in turn, means seminars can be more lively and focus on working with a range of sources:
“we can get students to follow things through in a more creative fashion – looking at disease terminology and how it shifts over time, for example – because the tools are available to do that more quickly and easily. Speeding this up allows you to ask more interesting questions in seminars.”
Cardiff’s second-year course in digital technologies and history, meanwhile, places emphasis on the way students can use web-based packages to examine and interpret historical evidence by themselves. “The first part is very much training students, developing skills,” Waddington explains. “The second part is guided projects: mapping where people died on the Titanic, for example, or how medieval kings moved their courts across Europe.”
For Waddington, it’s about “instilling a sense of curiosity and excitement in terms of what information students can find and what innovative things they can do with it – the questions they might ask, directions they can go, mistakes they can make.”
The challenge has always been how to combine play with pedagogical rigour. Technology-enhanced teaching brings with it the need for an assessment revolution.
By and large, regardless of the extent to which digital content is integrated into curricula, assessment methods remain very traditional; exams and coursework essays are blunt mechanisms for gauging the effectiveness of student learning.
Universities have, amid understandable concerns about quality assurance in an era of league tables, been slow to explore the potential for online discussion, collaborative digital projects or even public engagement in student assessment. There is no precedent for examining innovative outputs.
Assessment diversification is, however, being explored at LSE, where senior learning technologist Darren Moon remains sceptical about resource-driven digitisation projects, which he feels are “often low-hanging fruit, well within the comfort zone of both tutors and educational developers and technologists, and often detract from larger organisational issues such as assessment practice and student engagement."
“Resources themselves do not engage students,” he cautions,
“activity engages students, and banks of resources gather dust if they are not used as part of assessment”.
For Moon, “essays represent a huge amount of student time, effort and investment we don’t get any additional value out of as an institution.” Instead, what the LSE wants to do is “create a series of exercises and activities that give students a feeling of greater value as part of the wider school community.”
The most radical example has been a collaboration with the international relations department on its third-year visual international politics module, which uses student-produced videos as part of its assessment. Designed in partnership with course convener and professor of international relations William Callahan, the course asks students to create 10-minute documentary films, giving them hands-on experience of using digital storytelling to craft their own messages.
Now in its third year (and oversubscribed every time), the course has found “a balance between visual and textual practice” that Moon believes gives students “a far more nuanced understanding of the complexity of the decisions involved in visual politics”, namely how visual sources can be used as political agents. These films then become sources in themselves for subsequent years to analyse.
Student as producer
The potential doesn’t stop at teaching and assessment. Within a digital curriculum, it becomes much easier to think about students as potential collaborators.
Moon is encouraged by current levels of conversation around students as co-producers of knowledge and active partners in course design. Hallett agrees it’s only a short step from co-creating resources with students to the co-creation of curricula: “the empowerment to change the curriculum is fed by the empowerment students feel when more in control of resources they can find and material they can refer to.”
Students have, of course, always been able to suggest topics for research projects. What technology offers is scope to set their own agendas. If the next step in digital resource curation is as expected – guest comments on archives, annotations that can be accessed and saved by others – then students will be a crucial part of that research community. In effect, they will become researchers.
“There is research-led teaching,” emphasises Waddington, “but there’s also teaching-led research. Lecturers are often doing admin, we’re not always having exciting research conversations among ourselves – those conversations take place in seminars.”
But students are paying for the privilege of being taught, and research repeatedly shows that they remain wary of self-directed study, especially online. “Students still need instructions,” warns Hallett; “they need to be directed and allowed to play in ways that are affirmed by the tutor. If we just send them in they will struggle, but if we give them frameworks, criteria and guidance, they tend to be much more adept.”
We have, as yet, a very incomplete and inconsistent picture of how students use – or want to use – digital materials, let alone how they take part in structured research and learning activities outside the classroom. The assumption that they will automatically engage with web-based course components as enthusiastically as they do with social media is premature.
More work remains to be done on the reasons for students’ non-use of the myriad technological resources available to them.
For that reason, the role of technology in universities must be driven from the outset by a clear understanding of intended use, and Moon believes that today “we’re asking many of the same questions we were ten years ago” – and they’re the wrong ones. VLEs, for example, “will inevitably pale in comparison to Facebook or Google, so in terms of giving students a real, compelling reason to use institutionally provided tools we’re always at something of a disadvantage.”
The right questions, he maintains, are oriented not around particular tools or resources, but around how students self-organise: “if we see zero engagement on Moodle, but it has to be happening to complete the task, they are doing it, so it’s happening elsewhere – where? How?”
For students who are on campus all the time – “is it even right”, he wonders, “to expect them to use an online communication platform as well? And does it matter, so long as we get the structure of activity and assessment right?”
2019-20, Moon estimates, will see the first cohort of undergraduates who have had iPads in the classroom since Key Stage 3, and for whom technology-enhanced learning is the norm. What will they confuse learning with?
UK Medical Heritage Library
The UK Medical Heritage Library holds over 66,000 19th-century medical works, digitised over a three-year period by Jisc, the Internet Archive, the Wellcome Trust and university libraries from around the UK.
The library offers an eclectic selection of health-related subject matter, from medical practice to sport, nutrition and pseudo-scientific disciplines such as phrenology and hydrotherapy. All publications have been digitised with full-colour page images, PDF downloads and searchable OCR text, and an open access ethos enables researchers to cross-search the collection alongside those held at the British Library, amongst others.
What’s most innovative about the UK Medical Heritage Library, however, is the set of tools allowing visitors to explore the collection through a variety of creative visualisations: timelines, n-grams, dendrograms, maps and sunbursts. These highly aestheticised access points let users choose their own way into the catalogue, offering multiple pathways through the material that enrich the interpretations it can support.
This is the second in a series of features on topics that will be covered at this year's Digifest, which takes place on 14-15 March 2017.
Dr Raphael Hallett from the Leeds Institute for Teaching Excellence will speak on day one of Digifest, giving the talk 'Surfing in the Shallows' or 'Creative Bricolage': how are our students using online resources? If you're not attending in person, we'll be livestreaming this session as part of our online programme.
Keir Waddington from Cardiff University will take part in our day one workshop, 'designing digitally-enhanced curricula', and LSE's Darren Moon will be a panel member during our debate, 'institutional visions for a digital student experience', on the morning of day two.