Blog

Governance and Security: Challenges of Ethics, Trust and Consent in the Digital Age

The following two ESRC scoping questions were put to the Ways of Being Digital project’s steering group, which consists of experts from the digital field with backgrounds in academia and the public and private sectors:

  1. What are the challenges of ethics, trust and consent in the digital age?
  2. How do we define responsibility and accountability in the digital age?

Whilst recognising the potential that digital technologies might bring to governance, experts reported that thinking about the domain of governance and security in the digital age is still at an early stage – a relatively new environment that demands a reassessment of existing knowledge about governance and security in the light of digital technologies. This is reflected in the following comment: “Who can speak and who can’t, no one knows, time and effort needs to go into figuring this out, framing the code and the governance is just as important as the content it facilitates.”

Experts who contributed their thoughts to this domain pointed out that it will be difficult to weigh the excitement of being able to spread unfiltered ideas across the world against the realisation that this might also allow malevolent or criminal ideas to take hold. There are security risks linked to the use of digital media, and there are new points of control and new points of power, e.g. the owners of social media platforms.

Experts highlighted that ethical issues exist around security and security technology – for instance, individual privacy, terrorism prevention, surveillance, the use of biometrics, and the sharing, mining, storage and fusion of data.

Authentication was brought to our notice, partly because it is a process increasingly carried out by machines and is crucial to secure communication. The rising momentum of the ‘Internet of Things’, whereby almost anything can exchange data over a network, means that every access point becomes a potential incursion point that could be breached. In the event of this happening – and we know it might – damage has to be limited. Telemedicine, for example, is a sensitive area where password-based authentication alone does not provide enough security.
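Where password-only authentication falls short, one common mitigation is to layer on a second factor such as a time-based one-time password (TOTP). The sketch below is purely illustrative and not drawn from the project; the shared secret, the 30-second window and the six-digit code length are assumptions following the usual TOTP conventions (RFC 6238).

```python
# A minimal sketch of a time-based one-time password (TOTP) second factor,
# the sort of mechanism layered on top of passwords where password-only
# authentication is judged insufficient (e.g. telehealth access).
# The secret, interval and digit count are illustrative assumptions.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a short-lived one-time code from a shared secret and the clock."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    # Server and user device hold the same secret and compute the same code
    # independently, so a stolen password alone is not enough to gain access.
    shared_secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real credential
    print("One-time code for this window:", totp(shared_secret))
```

Because both ends derive the code from the shared secret and the clock, the code expires within seconds, which limits the damage if any single access point is breached.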

The notion that technology can greatly improve all aspects of governance is widely held, and big technology companies now offer a myriad of ICT solutions to hard-pressed local governments as they struggle to meet the demand for increased services. While the growth in public services transacted online brings many benefits, such as easy, anytime access, the process also generates enormous amounts of digital data, offering further analytical opportunities but also raising new concerns. Although it is not always clear whether this digital information can provide insights capable of informing better, more targeted (and cheaper) services, there continues to be confidence that clever algorithms and computer analysts hold the key to improved governance.

The UK Government has yet to publish a Digital Strategy and we invite commentators to tell us what they think should be included. If you can provide evidence or point us to research that illuminates any of the issues raised here, then please contact the project team.

Data and Representation: How do we Live with the Algorithms that Shape our Lives?

As part of an initial survey of our project steering group, we asked for comments on the scoping questions proposed by the ESRC for the seven domains identified by the review. For the data and representation domain, the question was, “How do we live with and trust the algorithms and data analysis used to shape key features of our lives?”

Recognising the vast literature that exists around this subject area, we concentrate, in this blog, only on some of the headline points raised by the experts, who come from seventeen institutions across the world, representing social science, arts, engineering and science disciplines.

Comments from the initial survey did not question the potential for massive advances in computing power and computing skills to enable powerful insights and allow for better decision-making; however, they did reflect a consensus that some areas of this domain are under-researched and not well understood.

They noted the current focus on big data and the commercial possibilities and policy drivers that are informing current discourses about data. This is reflected in the significant amount of funding available to undertake research into big data within academia, the policy field and the commercial sector, so much research exploring big data and its applications is already in place. Key findings suggest that the usefulness of big data is uncertain outside commercial marketing processes, although the idea of accessing live data has huge appeal because of its potential to facilitate almost real-time decision-making. However, an overarching point coming out of the initial survey was the critical role that confidence in these processes plays (i.e. public engagement and involvement with these processes has to be both reflective and safe).

Key areas that respondents felt required attention in relation to this issue are: the timeliness, representativeness and reliability of data; the potential for bias; and, as things stand, the fact that there is little evidence on exactly how the data might inform policy decisions. Despite the current very positive rhetoric about the value of data, experts warned that there is an important role to play in urging a more profound examination of the implications – acknowledging the complexity rather than advocating the simple provision and acceptance of solutions.

The responses from experts to questions in this domain overlapped significantly with the responses received in relation to the governance and security domain, because much of the promise associated with different forms of data and its analysis are still at an early stage. Some of the more pressing questions demanding attention were stated to be:

  • Who owns the data (e.g. governments, technology companies or multinational corporations)?
  • Who has control of the data produced?
  • Does ownership change when new data sets are formed through the merging of data sets?
  • Do the individuals who provided the data have any control or do all internet users end up (unwittingly) becoming the unpaid labour that produces free data for public or private companies?

As a priority, thought must be given to whether individuals have an active role in deciding what to trust, or whether it is safe to leave this to a third party who acts as a guide or trusted partner in the process. Digital literacy, and awareness of how the criteria behind this form of decision-making are set, were thought to be in short supply, necessitating rigorous quantitative and qualitative investigation to work through all the possible implications.

There was surprisingly little comment about data representation, given its importance. This is of course not a new issue, and attention has already been paid to the value that data analysis and visualisation bring, and to whether research methodologies change or adapt when studying web-based data.

In line with the other domains that form the basis of this study, experts have raised the issue of overlap with other domains; but the emphasis that has been given to the security, privacy, gate-keeping, curation, etc. of data has led some to go further and suggest that the match and overlap with governance and security is so pervasive that the two domains might be usefully merged.  We would be interested in your views and any research you are involved in that illustrates commonality or otherwise between data and governance.  The Ways of Being Digital website is one of many ways of keeping up to date with progress and letting us know of any relevant research projects that help to answer the ESRC scoping questions.

Health and Wellbeing: Does Digital Technology make us Fitter, Happier and more Productive?

The scoping question proposed by the ESRC for the health and wellbeing domain was, “Does technology make us healthier, better educated and more productive?”

When put to experts as part of our initial Delphi survey, responses often suggested that many of the negative impacts of digital technology on health and wellbeing are generated in other domain areas – an example being the economy and sustainability domain, where the dual effects of automation and globalisation have brought about feelings of insecurity and anxiety, which can often lead to chronic health problems. However, experts also saw technological developments as key in combating global health challenges such as chronic illness and ageing populations. Learning to harness the potential of health technologies while minimising any negative impacts becomes particularly significant when set against global ageing challenges. Yet the benefits and any drawbacks of introducing, for example, telehealth are not yet clear. As one expert put it, “There’s a difference between whether it does and whether it can and/or could”. (This ESRC event on Older People’s Health and Wellbeing is exploring precisely this type of issue.)

Health and wellbeing is determined by a complex mix of factors including income, housing, employment, education, lifestyle, and access to health care and other services, which results in significant inequalities between individuals and different groups in society. Experts suggest that digital tools and services might be usefully added to this list and/or that digital health inequalities should be investigated in their own right.

Offsetting the need for specificity, experts also raised the requirement for a better understanding of the broader consequences of the intensification of technology in our lives (e.g. the quantified self, access to information, health monitoring, remote working practices and syndromes such as digital burnout). One approach is the work being undertaken to develop open-source software that practitioners can use to build specific behaviour-change interventions for wellbeing – an example being the EPSRC project ubehave. The uncertainty about how best to use digital technology in healthcare provision is also felt in the commercial sector, with industry needing more research and policy guidelines for developing technology that can be mainstreamed into healthcare services.

There is an overall concern that ongoing technological innovation in healthcare requires attention to how new technologies are embedded in care, and a better understanding of how digitally supported and digitally provided care compares with traditional forms of care given by humans. Concerns raised by experts also extend to patient confidentiality. As one expert put it, “How can we ensure the privacy and security of health data?”

There is great potential for digital technology to support programmes of wellbeing and healthcare. The capturing and sharing of dynamic health data across networks has the potential to enable better, joined-up services; whilst in the context of increased demand for healthcare, digitised health products hold out the promise of doing more for less. However, as yet there is little evidence about how digital technologies actually support healthcare, and little is known about the benefits of self-monitoring or about variation in the way that people interpret the resulting data. Another under-researched area is thought to be the anecdotal perceptions of harm that digital technologies do to wellbeing and how these might be addressed. Comments also emphasised that specific aspects of learning and education need attention, such as the cognitive effects of multitasking, the effects of digital technologies on reading, and child development inequalities.

It was also proposed that our ESRC question might be better phrased as, “How is digital technology associated with health, education and productivity?” to ensure we capture both positive and negative aspects.

Economy and Sustainability: How can we Construct a Digital Economy that is Open to All?

In our initial Delphi survey, experts were asked what impacts the automation of the future workforce might bring and how we can construct a digital economy that is open to all, sustainable and secure.

The brave new world (or, some might say, dystopia) that can be brought about through the automation of jobs is beginning to stand out as one of the biggest challenges facing society, governments and individuals. Worklessness, underemployment and low-level employment have become undesirable features of developed economies. The introduction of technology has attracted much of the blame for this situation, but are insecure jobs and stalled wages and living standards the result of automation, robotics and artificial intelligence, or of the fact that just about everything is being digitised? Furthermore, should we sidestep old debates about technology replacing jobs so we can focus on working out exactly who is losing their jobs, who is working for nothing, who is working in the sharing or gig economy and how this affects their sense of belonging and self-worth, and, importantly, how we can avoid the adoption of populist/nationalist discourses?

Experts raise the issue that many people who traditionally held working-class jobs have now lost, or are at risk of losing, that work, being relegated to casual work or long-term unemployment. If more and more people are pushed into this so-called underclass, it creates a danger to social cohesion. To an extent, this has already occurred: new forms of labour and work are automated or outsourced to economies where terms and conditions are nowhere close to those of the countries whose needs they serve. We were told that much can be learnt from the US-Mexico border area, where there has been a massive rise in maquiladoras: manufacturing operations where materials and equipment are imported on a duty-free and tariff-free basis for assembly and processing into finished products for export, often straight back to the raw materials’ country of origin. Understanding the precise role of digital technology in the creation of undesirable economic effects, or otherwise, is particularly crucial where excluded workers with poor terms and conditions are serving the needs of wealthier neoliberal economies.

Robots are back in the news, although they have been having a massive impact on productivity since they were introduced back in the 1970s. With the introduction of new technologies such as cloud computing, the ‘Internet of Things’ and artificial intelligence, robots have reached a stage of development that promises to bring production closer to its markets – and as more and more robots are introduced, labour costs will no longer be the chief driver of location decisions. Reshoring manufacturing might be good news, but it is unlikely to create many jobs, and because robots are essentially ICT systems, they are also compromising jobs undertaken by white-collar workers and knowledge workers.

However, the ubiquitous nature of technologies and what they can do has focused employers’ attention on what machines find more difficult, e.g. the softer human skills that complement technological advances and which are increasingly seen in all sectors as highly valuable. Will a digitised economy remove drudge jobs and free up more time for leisure? Can digital make untapped value accessible (see this recent ESRC study)?

New digital platform-based business models tend to be disruptive, supplanting traditional ways of working and bringing change that is typically reported negatively. Despite this, there are many benefits, and experts would like to see attention given to:

  • What exactly are the main threats to the economy and sustainability in the digital age?
  • How can these threats be mitigated?
  • What are the emerging issues and related challenges for economic theory and policy?
  • What are the roles of activism, trade unions, charities, and the third sector in the negotiation of new economic models and realities?

Experts raise the issue of blurring and overlap between this and other domains because the digital divide, security, the cloud, data storage and environmental sustainability all feature strongly in this domain. We were also reminded that continuous attention must be paid to developing digital education: the learning and skills that are already fundamental to new digital ways of working.  In alignment with skills development, continued attention and investment must be given to ensure that physical digital infrastructure (including high speed broadband, 5G, and even satellites) is up to the job.

In summary, more questions than answers have been raised here. We would be grateful for your views on which questions should be prioritised. Further, if you are involved in research that throws light and understanding on any of the issues raised here, or others, then let us know.  Find out how to contact us here.

‘Fake news’

Over the past few months, “fake news” has itself been the focus of much serious news coverage. Setting aside the implications this has for journalism and the role of news media, what else might be driving the current moral panic over the phenomenon? Fake news has been blamed for the Brexit referendum result and Trump’s election victory, and there is evidence that it is being deliberately manufactured by political and state actors.

As I have argued in a recent article for The Conversation, the function of fake news as propaganda is not new; it has been a feature of political communication throughout history. The newness, and part of the moral panic, arises from the use of digital media – that is, from the spread of fake news through social media. Ironically, this is in part because social and digital media also make visible the ways in which fake news is created and spread: they make it possible for those with the time to track it down and identify it. At the same time, most people do not do this, and the speed and shareability of social media therefore allow it to spread.

Though there are potential technological fixes for this – and the social media providers need to be brought out from behind their moral and ethical bolt hole of “we just provide the technology” in order to share some responsibility – in truth, this is a media literacy issue. As I note in my Conversation article, the ability to spot and identify fake news is a function of prior exposure to a variety of news sources, especially hard news. Taking the time to make assessments and check before sharing is an issue of communication ethics. We therefore need to ensure that we are equipping our children and students with these skills. We also need to ask ourselves how often we check before sharing – especially when we agree with the sentiment of the post.

P.S. Please share …